History and Evolution of IBM Mainframes – 2014 – for GSE Lahnstein

IBM
System z Enterprise News
The Future Runs on System z
Roland Trauner
trauner@de.ibm.com
Xing, Linkedin, Facebook, Twitter,
IBM Greenhouse, Threema
Roland Trauner
trauner@de.ibm.com
Special thanks to:
Svend Erik Bach
IBM Distinguished Engineer, retired
S/360 - Announced April 7, 1964
IBM Mainframes (EDPMs)…
The first 50 years of Evolution and Innovation
EDPMs - Electronic Data Processing Machines
During the 1950s, Data Processing came of age…
Designed for specific applications - mainly scientific/computational
Every family had a different, incompatible architecture
Even within families, moving to the next larger system was a migration (*)
Ready to be announced as the new 8000 high-end mainframe in the early 1960s
RESULT – customers were getting frustrated with migration cost
(Pictured: IBM 701, RAMAC disk, IBM 7090/7094, IBM 1401 machines - all transistorized)
(*) Common compilers made migration easier - COBOL and FORTRAN
INNOVATION - S/360 happened because it was the right time…
IBM decided in 1961 to drop the announcement of a new (8000) system and
instead use a radical new approach, building a “total cohesive New Product Line”
IBM invested $5B (initial estimate less than $1B) over 4 years to develop…
– a family of 5 increasingly powerful computers (LARGEST = 2/300 x SMALLEST)
– compatible => with the same architecture
– for all types of applications
– running the same operating system (“never” done before)
– ALL using the SAME 44 new peripheral devices
Solid Logic Technology (SLT) - leading edge (key) - Semi-Integrated (hybrid technology)
Magnetic Core Memory - very reliable
Combinations of Microprogram code and HW to implement different capacity levels at a realistic cost
Emulators + Microcode in “Read Only” Control Memory
– IBM 1400, 7080, 7090 systems (“by flip of a switch”)
– faster than on the native systems - good price/performance
System/360 – Announced April 7, 1964
The 360 in the name referred to “ALL DEGREES IN A COMPASS”
IBM's $5,000,000,000 Gamble…
Thomas Watson, Jr.
Chairman and CEO, IBM
“Never again will customers have to change because of us…”
Protection of investments….
The SPREAD report – Commitment to Change…
The title of the original press release from launch was
“IBM System/360 Combines New Microelectronic Circuits
with Advanced Concepts in Computer Organization.”
Expected Lifespan in 1964 was 25 years…..!!
Value Proposition - Technology
Value Proposition = Make a promise
NO CHANGE OF PROGRAMS AND DATA in the future
1st line of code written in ”1964” is still running….
Opposition (argument) to S/360…
“It is insane to throw everything away and build something new.” (We did it again in 1994!)
WHY was it done in 1964?
“The time required it”
“We had the Capacity and Ability to make the change…”
“A simple consideration… Clients come first…”
“A (7070) customer - willing to pay (trade-off) Performance for Compatibility”
Industrial Revolution – Power the Body…
Computer Revolution – Power the Mind… (the S/360 architecture was a must for that…)
Erich Bloch, Bob Evans, Fred Brooks Jr. … the leading group (Amdahl – architect)
• Wafer = 1000 circuits, but Low Yield - at least 50% thrown away => Not mature YET
• IBM decided to go with SEMI-Integrated Circuits
  - slice/cut circuits out
  - mount “sliced” circuits on ceramic “print cards” (quite a job…)
• 1st commercially available data processing system based on micro-miniaturized
computer circuits (transistors around 0.7 mm in size)
• Other companies trying “to go with” fully integrated “wafers” failed significantly
The S/360 Principles of Operation – S/360 POP
Separates Architecture from Implementation
SAME Instruction Sets (standard & optional)
across all systems - may be implemented differently - HW and/or Microcode
UPWARDS Binary Program Compatibility - (and some downwards)
Same Addressing scheme - 24 bit (32-bit architecture)
I/O Subsystem off-loading CPU resource
– Channels (specialized processors) to move data between I/O devices & Memory
– Separates CPU processing and I/O Operation – data moves asynchronously
– …plus the SAME STANDARDIZED I/O Interface on systems and I/O devices
Unique Interrupt structure
– I/O, Program, Supervisor Call, External, Machine Check etc.
Storage keys - Supervisor State - Isolation & Program Protection
(Architecture timeline: 1964, 1970s, 1980s, 1990s, 20XX’s)
Assumption: HW/SW Systems may/will fail - Unique Reliability and Availability
Expandable in the future according to Technological Capabilities & Market requirements
I/O Subsystem – implementation extended over time
“SAME” independent structure from 1964 until today…
Unique Capability
“All” I/O handled outside the “application” engines
Interrupt structure - Connect - Disconnect
(Diagram: CPU1…CPUn drive I/O through a CHANNEL SUBSYSTEM with many channels,
connected to control units (CU) and devices (DEV))
ALL I/O interfaces may be SHARED
I/O bandwidth exceeds 300 GB/sec via 1024 channels (I/O interfaces)
ALL I/O devices may be accessed via up to 8 I/O interfaces
I/O may be initiated on one interface & execute over other interfaces
Huge I/O Configurations - Concurrency in Access to the same storage device
ESCON, FICON, FCP - may run/interleave a high number of concurrent I/O operations
1, 2, 10,… Gigabit Networks
Crypto adapters - SSL/Secure Crypto
Coupling Links (up to 2 GB/sec)
PCI-E
(a conceptual sketch of this asynchronous offload follows below)
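The essential point above is that a channel is itself a small processor: the CPU starts an I/O operation, keeps computing, and is notified by an interrupt when the channel has moved the data. Here is a minimal conceptual sketch in Python, with a thread standing in for the channel; the device name is hypothetical and this is an analogy, not real channel-subsystem code:

```python
import threading
import time

def channel_program(device: str, io_done: threading.Event) -> None:
    """Stand-in for a channel: moves data between the device and memory
    independently of the CPU, then raises an 'I/O interrupt'."""
    time.sleep(0.1)                       # simulated data transfer
    print(f"channel: transfer from {device} complete")
    io_done.set()                         # the interrupt

io_done = threading.Event()
# 'Start I/O': hand the request to the channel and return immediately
threading.Thread(target=channel_program, args=("DASD-0A42", io_done)).start()

# The CPU keeps executing application work while the I/O is in flight
total = sum(range(1_000_000))
print(f"cpu: computed {total} while the I/O ran")

io_done.wait()                            # field the interrupt
print("cpu: redispatching the task that waited for the I/O")
```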
OS/360 - Operating Systems + ”Job Control Language” (JCL)
GOAL - ONE Operating System (OS)
- multiprogramming / variable number of concurrent tasks / variable size
- multiprocessing
Reality - Complex to implement the vision & the OS needed to be able to deal
with a limited amount of REAL memory (minimum 8KB / maximum 8MB)
Basic (BOS - 8KB), TAPE (TOS - 16KB), DASD (DOS - more than 16KB) and OS/360 versions
Typical memory size for large systems became, over time, around 512KB to 1MB
(Diagram: three 16MB memory maps; 1967 - no memory relocation HW)
– PCP: Single Task, Batch - an intermediate OS
– MFT: Multiple Fixed Tasks - initial: 4 Partitions + OS - fixed partition limits
– MVT: Multiple Variable Tasks - initial: 15 tasks + OS - no fixed limits,
risk of fragmentation; + Online + Interactive Work - Time Sharing Option (TSO)
Access methods: BSAM, QSAM, ISAM, BDAM, BPAM
Telecommunication: BTAM, TCAM
SW - Job Control Language …etc…
So much (with S/360) was done right,
but this thing (JCL) was messed up with superb competence…..
• More assembler-like than PL/I-like…
• Card-column dependent
• No clear branching and subroutine call
• Not seen as a TOOL for SCHEDULING, but more like a few control cards
• NOT designed - it JUST GREW (wild…)
Other comments…
• Compilers for 6 languages – Assembler, Fortran, PL/I, Cobol, Basic, RPG…
• S/360 is INITIALLY a BATCH processing system throughout the history of OS/360
(BOS, TOS and DOS plus PCP, MFT and MVT)
• It becomes an ONLINE / Time Sharing System with Virtual Storage in the 1970s…
Innovations: Virtual Storage + CP/67 & CMS + MP + Supercomputer….
S/360 Model 67 - first IBM system with Virtual Storage and
Multi Processor (MP) capabilities
– Control Program/67 (CP/67) with
the Cambridge Monitor System (CMS)
The “unofficial” operating system
from the IBM Cambridge Scientific Center
(1st version of the Virtual Machine (VM) concept…)
S/360-67
– Time Sharing System (TSS) - the “official” time-sharing OS from IBM,
using the model 67's “Dynamic Address Translation” (DAT).
It NEVER “flew”…
NASA's Goddard Space Flight Center used
the S/360 model 95 supercomputers…
(Photo: run by an IBM System/360, a massive turret lathe
shapes a heat shield for an Apollo spacecraft)
The 1970s…the architecture matures and expands
Fully integrated (MONOLITHIC) memory circuits – with S/370 Model 145
– 1,400+ circuit elements per monolithic chip => 174 complete memory circuits
Virtual Storage (memory) announced in 1972 - with S/370 Model 148
– 2KB / 4KB pages of memory - 64KB / 1MB segment sizes
– HW Dynamic Address Translation (DAT) of Virtual to Real addresses
– HW ISOLATION of Program Addressing Areas = Address Spaces
– VM/370 announced to assist migration from OS/360 to OS/VS (VS1, SVS, MVS)
Workload Management - System Resource Manager (SRM)
– BUSINESS NEED - DRIVEN by MIXED Workload – Tx/DB-Online, Time Sharing, Batch….
– Priority (CPU+IO), Working SETs, Paging Rate, Response time (TSO, IMS, CICS,..)
Channel Subsystem - 256 Channels
Extended MP support (158MP/168MP)
Operating systems – MVS, VS1, VM/370
– OS/360 (PCP, MFT, MVT) → VS1, SVS, MVS (VSAM, VTAM)
– DOS/360 → DOS/VS
– CP/67 → VM/370
JES2 (HASP) & JES3 (ASP) – a religious battle
SMP/E - Install & Maintenance of MVS
– FMIDs, SYSMODS, USERMODS, HOLDs, pre-reqs, co-reqs,…
Water-Cooling
(Pictured: IBM 3033)
The 1970s…the architecture matures and expands
S/370 architecture with Virtual Storage - August 1972 (S/370 Model 148)
– Dynamic Address Translation - DAT
– Integrated Memory Chips
– 4KB PAGES and 64KB SEGMENT sizes - (optional 2KB/1MB)
BASE technology for…
– Real-time Online Tx and DB Systems like CICS & IMS (DL/I)
– Interactive Work like TSO (and VM – CMS and test of MVS)
– Network and I/O access methods - BTAM, TCAM, VTAM and VSAM
– Authorized Program Facility
– System Resource Manager - Priority, Working Sets, RT,…
Essential Driver for Programmer Productivity
(Diagram: Real Memory (4MB) backing the virtual memory maps)
– SVS - Single Virtual System: one 16MB virtual memory, Storage Protect Keys 0-7
(an interim release, waiting for MVS; old programs ran unchanged,
even without recompilation)
– MVS - Multiple Virtual Systems, 1974: multiple 16MB Address Spaces,
HW-Isolated Address Spaces, 16 Storage Protect Keys 0-15,
Shared Virtual Area (Common), MVS Base Control Program
Multiprocessing - Performance & Advanced Recovery
Storage Protect keys - Protect System Code from Middleware from Applications
– key, store, fetch (a small translation/protection sketch follows below)
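As promised above, here is a small, purely illustrative Python sketch of the two hardware ideas on this slide: DAT translation (4KB pages and 64KB segments give 12 offset bits and 4 page-index bits within a 24-bit address) and storage-protect-key checking. All table contents and key values are invented for the example; the real mechanism lives in S/370 hardware and microcode.

```python
PAGE_SHIFT = 12            # 4KB pages  -> 12 offset bits
PAGE_INDEX_BITS = 4        # 64KB segments hold 16 pages of 4KB
SEG_SHIFT = PAGE_SHIFT + PAGE_INDEX_BITS   # 16 bits

# Invented tables: the segment table maps a segment index to a page table;
# page tables map a page index to a real page frame number.
page_table_0 = {0: 0x3A, 1: 0x1F}          # hypothetical frames
segment_table = {0: page_table_0}

storage_keys = {0x3A: 8, 0x1F: 0}          # invented key per 4KB frame

def dat_translate(vaddr: int) -> int:
    """Translate a 24-bit virtual address to a real address (illustrative)."""
    seg = vaddr >> SEG_SHIFT
    page = (vaddr >> PAGE_SHIFT) & ((1 << PAGE_INDEX_BITS) - 1)
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    page_table = segment_table[seg]         # missing entry = segment fault
    frame = page_table[page]                # missing entry = page fault
    return (frame << PAGE_SHIFT) | offset

def store_allowed(frame: int, psw_key: int) -> bool:
    """Storage-protect check: key 0 is master; otherwise keys must match."""
    return psw_key == 0 or psw_key == storage_keys[frame]

real = dat_translate(0x001203)              # segment 0, page 1, offset 0x203
print(hex(real))                            # -> 0x1f203
print(store_allowed(real >> PAGE_SHIFT, psw_key=8))   # False: key mismatch
```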
NEW(*1) Technology in 1980 - Growth & Reliability
The IBM 3081 introduced new TECHNOLOGY
– Base technology for ALL systems up to TODAY
– Important for Availability & Dynamic Scalability
Thermal Conduction Modules (TCMs)
– Very efficient WATER-cooling technology
– Ceramic multilayer with mounted chips: up to 133 chips, 704 circuits/chip;
28-33 wired ceramic layers; 350,000 holes -> vertical wires; 16,000 chip
contact points; Extremely Reliable
– 16/19 TCMs to build a UNI (370,000+ circuits, around 2,000 chips);
54/56 TCMs to build a 4-way
Processor Controller – Service Console
– Analyze and LOG
– Call Home - Remote Support
– System Programmer out of the Machine Room - Console up to 1,500 meters away
N-way support matures: 3083 1-WAY, 3081 2-WAY, 3084 4-WAY, ES/9000-600 6-WAY
S/370-XA Channel Design - CHPIDs & Subchannels
Important New(*1) peripherals
– 3800 Laser page printer
– 3850 Automatic Magnetic Library
– 3380 disk (4-way access - Dynamic Reconnect)
– New DASD architecture – Logical Structure independent of Physical Structure
(*1) Spin-offs of the Future Systems (FS) project during 1971 to 1975 - like
S/38, AS/400 and the RISC architecture
Extended Addressing Architecture in the 1980s
S/370 Extended Architecture (XA) - 1981 (shipped in 1983)
– 24-bit (16MB) or 31-bit (2GB) addressing
– Bi-modal execution - both types of programs may co-execute simultaneously
– Real & Virtual Memory addressing
Drivers - Virtual Storage Constraint:
– Growing System/Application functional code
– More Data in memory
– Programmer productivity - “Selling Virtual Storage!”
(Diagram: MVS/370's single 16MB map vs MVS/XA's 2GB map - COMMON below the
16MB line BECOMES more and more squeezed; MVS/XA adds Private and COMMON
areas above the 16MB line plus a Protected System Area (HSA))
ESA/370 - Enterprise Systems Architecture for S/370-XA, 1985
To “relieve” pressure on central memory – a business need…
Expanded Storage (Memory) - a layer between Memory and DASD
– used as an Extremely Fast paging/swapping device
– Saves CPU and reduces response times (Sub-Second RT)
– IMPORTANT base for DATA-in-MEMORY
– Performance-sensitive Data Areas
– Hiperspaces (Buffer pools in Expanded Storage)
(Access-time hierarchy: CP - nanoseconds; CENTRAL STORAGE - microseconds;
EXPANDED STORAGE - multiple microseconds; PAGING/SWAP I/O to DASD STORAGE -
multiple milliseconds)
ESA/370 - Enterprise Systems Architecture for S/370-ESA, 1988
Extended Execution and (Data) Addressing architecture…
Cross Memory (XM) and Data Spaces
– Programs may execute instructions out of MULTIPLE Address Spaces
– Data Spaces – Data-in-Memory (Data Bases and other large data structures)
– VSCR - Expanding the addressing area significantly (multiple 2GB SPACES)
– Tighter integration between Multiple Transaction and Database Systems
(Diagram: a TX-system ADDRESS SPACE and a DB-system ADDRESS SPACE share
Buffer Pools in data-only DATA SPACES - HIPERSPACES in Expanded Storage -
with HW-Controlled Access)
The strength of XM:
• Integration
• Data-in-memory
• Sub-Second RT
Virtualization of Processor Systems in the 1980s
Start Interpretive Execution (SIE) facility in 1980
– The SIE instruction gives a “GUEST” full control over the processor HW
– Initially used by VM/XA, then by PR/SM
– SIE under SIE is possible (VM); SIE Assist microcode
Processor Resource/Systems Manager (PR/SM) in 1987 establishes
– Multiple Operating Systems on the same HW
– Multiple Architectures on the same HW
– HW Isolation
LOGICAL Partitions - LPARs - initially 10, now 60
– Sharing of CPU at the % level - Dynamically Adjustable - Guaranteed
– LPARs may ABSORB excess capacity from other LPARs
(use of SHORT / LONG Peaks and Valleys; a sharing sketch follows below)
– Memory & Channels - distributed - Dynamic Re-Allocation - Security Control
1990s and on…
– SHARING of channels and other similar I/O resources
– Dynamic Re-Allocation of Resources among LPARs under Workload Mgr control,
according to BUSINESS POLICIES
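As a rough illustration of the weight-based sharing described above, the Python sketch below computes each LPAR's guaranteed share and lets busy LPARs absorb capacity that others leave unused. The LPAR names, weights and demands are hypothetical, and PR/SM's actual dispatcher is far more sophisticated than this:

```python
# Hypothetical LPAR weights and current demands (fractions of total CPU)
weights = {"PROD": 70, "TEST": 20, "DEV": 10}
demand  = {"PROD": 0.90, "TEST": 0.05, "DEV": 0.30}

total_weight = sum(weights.values())
guaranteed = {n: w / total_weight for n, w in weights.items()}

# Pass 1: every LPAR gets min(its demand, its guaranteed share)
alloc = {n: min(demand[n], guaranteed[n]) for n in weights}

# Pass 2: unused capacity is absorbed by LPARs that still want more,
# in proportion to their weights (peaks fill other LPARs' valleys)
spare = 1.0 - sum(alloc.values())
wanting = {n: demand[n] - alloc[n] for n in weights if demand[n] > alloc[n]}
while spare > 1e-9 and wanting:
    w_sum = sum(weights[n] for n in wanting)
    for n in list(wanting):
        extra = min(wanting[n], spare * weights[n] / w_sum)
        alloc[n] += extra
        wanting[n] -= extra
        if wanting[n] <= 1e-9:
            del wanting[n]
    spare = 1.0 - sum(alloc.values())

for name, share in alloc.items():
    print(f"{name}: {share:.1%} of the machine")   # guaranteed share + absorbed spare
```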
System z mainframe virtualization - not an “add-on”, but a “built-in”
Virtualization in MULTIPLE Dimensions - “2” Dimensions of Virtualization:
– 1st Dimension: up to 60 LPARs - PR/SM, the HW Hypervisor, uses SIE -
with HW-Enforced Isolation
– 2nd Dimension: 100’s to 1000’s of Virtual LINUX servers in VIRTUAL Server
Racks (Virtual Racks) - VM, the SW Hypervisor, uses SIE
A Very Large Shared Resource Space allows for consolidation and tight
integration of multiple CORE Business Applications together with Large Server
Farms on the same footprint, with High-Speed VIRTUAL networks and low-latency
interconnect - ABLE to ABSORB PEAKS for LARGE WEB networks with VARIABLE and
UNPREDICTABLE loads.
Virtualization is transparent to OS/Application execution and I/O operation
ESA/370 - Enterprise Systems Architecture for S/370-XA - 1989
Basic Sysplex – Simplification of complex infrastructures
Systems may be independent systems, LPARs, or combinations
(Diagram: systems joined by a Sysplex Timer RING connection, exchanging
Signaling Messages)
• Common Consoles
• Sharing of Basic Resources across systems
• Common security definitions/control
• Limited value for DATABASE SHARING
Breaking Down the Glass House – late 1980s/early 1990s
Growing focus on the Implementation of Industry Standards
– POSIX – UNIX APIs made available as a general, integrated API
– TCP/IP in co-existence with VTAM
ESCON – Extension of the I/O architecture
– GLASS FIBER technology - Higher Speeds and 10/17 KM distances
– EMIF – “VIRTUALIZATION” of CHANNELS
– 1st step towards the Fibre Channel Protocol (Open Standard)
(Diagram: PROD1, PROD2, TEST and DEV LPARs - without EMIF each needs its own
PHYSICAL CHANNELS and cables to the CONTROL UNIT (CU) interfaces; with EMIF
they share LOGICAL CHANNELS through the CHANNEL SUBSYSTEM)
Cost Savings - Simplicity - Operation “unchanged” - Planning & Insight
Late 1980s and early 1990s - “The Mainframe is dead”
Down-sized PCs & Minis running UNIX and Windows are running the future
Client/Server-based I/T infrastructure
… so said the IT Industry Observers
S/390 is dead or terminally ill
– the aging user base is deserting the MVS ship and moving to UNIX or NT
S/390 is expensive to buy and expensive to run
– UNIX or Wintel is many times cheaper
S/390 requires a huge Datacenter/Glasshouse
– big machines, water cooling and huge electricity bills
S/390 cannot run new eBusiness or ERP workloads
– it only runs Batch or Green-screen type work
(Pictured: the H6 9021-9X2 - 465 MIPS in 1994 - and the water-cooling
equipment used from the 1970s to the 1990s)
.... but that was before IBM “Right-sized the Mainframe”
MAJOR change of technology & architecture in the 1990s
CMOS technology in 1994 – moves from 6-way to 16-way during the 1990s
– S/390 Parallel Transaction Server, 15 MIPS to 65 MIPS (1- to 6-way)
– 1996/7 - System/390 G3/G4 => capacity exceeds the last IBM H6 bipolar
– 2000 - z900 (1- to 16-WAY) => capacity higher than PCM Bipolar (+ CMOS)
Parallel Sysplex - Coupling of Systems / LPARs (initially up to 16, later 32)
– Multiple systems BEHAVE like ONE
– Coupling Facility (CF) – Intelligent memory
Data Compression in HW – major savings in DASD, Memory…
OS/390 (1995) - Integrated Testing, Packaging and Ordering of the key, entitled
SW elements needed to complete a fully functional MVS Operating System package.
(Diagram: the ES/9000 Bipolar H6 (1993) - 65 MIPS engine, 10-way, 465 MIPS -
and the 9672 CMOS - 15 MIPS engine, 6-way, 65 MIPS - coupled via a CF in a
Parallel Sysplex with Shared Memory, up to 32 systems)
Why CMOS – technology trends, environmental, price
G3/G4 CMOS equals H6 capacity in 1996/7 & exceeds PCM Bipolar by 2000
(Chart, logarithmic scale, 1970-1994: the water-cooled Bipolar line - 168 at
2 MIPS (1970), 3033 at 5 MIPS (1977), 3083B, 3090E, up to the (H6) 9021-711 -
versus the air-cooled line - 9370-50, 9221-211, 9672-R1)
Bipolar H6 UNI capacity: 65 MIPS (465 MIPS system); 3 MIPS/KWatt;
up to 6,000 circuits/chip; 400 chips/CP
CMOS UNI capacity: 15 MIPS (65 MIPS system); 60 MIPS/KWatt (>1,500 today);
400,000 circuits/chip; 4 chips/CP
Doubling of capacity: Bipolar - avg. 5 years; CMOS - every 1 to 2 years
A Bipolar system (H7) was ready for announcement in 1994 but was cancelled -
a $2B Write-Off
“Inspiration” from the S/360 decision in 1961…
Some environmental info, which may be interesting…

Technology                  ES/9000 9X2 (Bipolar)   S/390 G6 (CMOS)
Total no. of chips          5,000                   31
Total no. of parts          6,659                   92
Weight (lb)                 31,145                  2,057
Power requirement (kVA)     153                     5.5
Chips per processor         390                     1
Maximum memory (GB)         10                      32
Space (sq ft)               671.6                   51.9
zEC12 Continues the CMOS Mainframe Heritage Begun in 1994
(Chart: clock frequency by generation, 770 MHz to 5.5 GHz)
– 2000 z900: 770 MHz - 180 nm SOI, 16 cores, full 64-bit z/Architecture
– 2003 z990: 1.2 GHz - 130 nm SOI, 32 cores, superscalar, modular SMP
– 2005 z9 EC: 1.7 GHz - 90 nm SOI, 54 cores, system-level scaling
– 2008 z10 EC: 4.4 GHz - 65 nm SOI, 64 cores, high-frequency core, 3-level cache
– 2010 z196: 5.2 GHz - 45 nm SOI, 80 cores, OOO core, eDRAM cache,
RAIM memory, zBX integration
– 2012 zEC12: 5.5 GHz - 32 nm SOI, 101 cores, OOO and eDRAM cache
improvements, PCIe Flash, architecture extensions for scaling
IBM zEnterprise EC12 (2827)
• Announced 08/12 – Server with up to 101 PU cores
• 5 models – up to 101-way
• Granular offerings for up to 20 CPs
• PU (Engine) Characterization: CP, SAP, IFL, ICF, zAAP, zIIP, IFP
• On Demand Capabilities: CoD, CIU, CBU, On/Off CoD, CPE, FoD
• Memory – up to 3 TB per server and up to 1 TB per LPAR; 32 GB fixed HSA
• Channels: PCIe bus; four LCSSs; 3 subchannel sets; FICON Express8 and 8S;
zHPF; OSA 10 GbE, GbE, 1000BASE-T; InfiniBand Coupling Links; Flash Express;
zEDC Express; 10GbE RoCE Express
• Configurable Crypto Express4S
• Parallel Sysplex clustering; HiperSockets – up to 32
• Up to 60 logical partitions
• Enhanced Availability: IBM zAware
• Unified Resource Manager
• Operating Systems: z/OS, z/VM, z/VSE, z/TPF, Linux on System z

IBM zEnterprise BladeCenter Extension (2458)
• First announced 7/10; Model 003 for zEC12 – 08/12
• zBX racks with: BladeCenter chassis; N+1 components; blades; Top of Rack
switches; 8 Gb FC switches; power units; Advanced Management Modules
• Up to 112 blades: POWER7 blades; IBM System x blades; IBM WebSphere
DataPower Integration Appliance XI50 for zEnterprise (M/T 2462-4BX)
• Operating Systems: AIX 5.3 and higher; Linux for select IBM x blades;
Microsoft Windows for x blades
• Hypervisors: PowerVM Enterprise Edition; Integrated Hypervisor for System x

IBM zBC12 (2828)
• Announced 07/13
• 2 models – H06 and H13 – up to 6 CPs
• High levels of granularity available: 156 capacity indicators
• PU (Engine) Characterization: CP, SAP, IFL, ICF, zAAP, zIIP, IFP
• On Demand Capabilities: CoD, CIU, CBU, On/Off CoD, CPE
• Memory – up to 512 GB per server; 16 GB fixed HSA
• Channels: PCIe bus; two LCSSs; 2 subchannel sets; FICON Express8 and 8S;
zHPF; OSA 10 GbE, GbE, 1000BASE-T; InfiniBand Coupling Links; Flash Express;
zEDC; 10GbE RoCE Express
• Configurable Crypto Express4S
• Parallel Sysplex clustering; HiperSockets – up to 32
• Up to 30 logical partitions
• IBM zAware; Unified Resource Manager
• Operating Systems: z/OS, z/VM, z/VSE, z/TPF, Linux on System z
Reality 2014:
IBM zEnterprise EC12
(2827)
IBM zEnterprise BladeCenter
Extension (2458)
IBM zBC12
(2828)
Networking in zEnterprise between z/OS and Linux on System z
HiperSockets Bridge – Network HA Overview
A HiperSockets channel is an intra-CEC communications path.
The HiperSockets Bridge provides inter-CEC HiperSockets LAN connections,
combining many HiperSockets LANs into one Layer 2 broadcast domain - an ideal
function for the SSI environment.
(Diagram: HiperSockets LANs on multiple CECs joined by HiperSockets Bridge
Functions into a single Layer 2 broadcast domain)
The HiperSockets Bridge function is available in z/VM 6.2 with a z114, z196,
zBC12 or zEC12 processor. Be sure to have the required z/VM and processor
maintenance levels.
Networking in zEnterprise between z/OS and Linux on System z
System z network alternatives between z/VM LPARs
(Diagram: LPAR1 (CP) runs z/OS or z/VSE with a TCP/IP stack; LPAR2 (IFL) runs
Linux on System z guests under z/VM with VSWITCHes; the connectivity options
are HiperSockets in firmware, OSA Express (OSAX) to the external network, and
the z/VM VSWITCH)
Networking in zEnterprise between z/OS and Linux on System z
Network bandwidth enhancement and automated failover
Resource Virtualization: OSA Channel Bonding in Linux
– The Linux bonding driver enslaves multiple OSA connections to create a
single logical network interface card (NIC)
– Detects loss of NIC connectivity and automatically fails over to the
surviving NIC
– Active/backup & aggregation modes
– Separately configured for each Linux
Network Virtualization: z/VM Port Aggregation
– The z/VM VSWITCH enslaves multiple OSA connections and creates virtual
NICs for each Linux guest
– Detects loss of physical NIC connectivity and automatically fails over to
the surviving NIC
– Active/backup & aggregation modes
– Centralized configuration benefits all guests
(a failover sketch follows below)
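The active/backup failover behavior both columns describe can be pictured with a tiny state-machine sketch. The interface names are hypothetical, and in reality this work is done by the Linux bonding driver or the z/VM VSWITCH, not by application code:

```python
class ActiveBackupBond:
    """Toy model of an active/backup bond over two OSA-backed NICs."""

    def __init__(self, slaves: list) -> None:
        self.slaves = slaves
        self.carrier = {nic: True for nic in slaves}   # link state per NIC
        self.active = slaves[0]

    def link_down(self, nic: str) -> None:
        """Simulate loss of connectivity on one NIC and fail over."""
        self.carrier[nic] = False
        if nic == self.active:
            survivors = [s for s in self.slaves if self.carrier[s]]
            if survivors:
                self.active = survivors[0]
                print(f"failover: traffic now flows via {self.active}")

    def send(self, frame: bytes) -> None:
        print(f"sending {len(frame)} bytes via {self.active}")

bond = ActiveBackupBond(["osa0", "osa1"])   # hypothetical NIC names
bond.send(b"hello")       # via osa0
bond.link_down("osa0")    # failover: traffic now flows via osa1
bond.send(b"hello")       # via osa1
```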
Networking in zEnterprise between z/OS and Linux on System z
Optimize server-to-server networking – transparently – zEC12 + zBC12
Shared Memory Communications (SMC-R):
– “HiperSockets-like” capability across physical systems
– Exploits RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE),
with qualities of service support for dynamic failover to redundant hardware
Typical Client Use Cases:
– Helps reduce both latency and CPU resource consumption over traditional
TCP/IP for communications across z/OS systems
– Any z/OS TCP sockets based workload can seamlessly use SMC-R without
requiring any application changes (see the sockets sketch below)
Measured benefits:
– Up to 50% CPU savings for FTP file transfers across z/OS systems versus
standard TCP/IP **
– Up to 48% reduction in response time and 10% CPU savings for a sample CICS
workload exploiting IPIC using SMC-R versus TCP/IP ***
– Up to 40% reduction in overall transaction response time for a WAS workload
accessing z/OS DB2 ****
– Up to 3X increase in WebSphere MQ messages delivered across z/OS systems *****
Requirements: z/OS V2.1 with SMC-R; 10GbE RoCE Express features; z/VM V6.3
support for guests
** Based on internal IBM benchmarks in a controlled environment using z/OS V2R1 Communications Server FTP client and FTP server, transferring a 1.2GB binary file using SMC-R (10GbE RoCE Express feature) vs standard TCP/IP (10GbE OSA Express4 feature). The actual CPU savings any user will experience may vary.
*** Based on internal IBM benchmarks using a modeled CICS workload driving a CICS transaction that performs 5 DPL (Distributed Program Link) calls to a CICS region on a remote z/OS system via CICS IP interconnectivity (IPIC), using 32K input/output containers. Response times and CPU savings measured on the z/OS system initiating the DPL calls. The actual response times and CPU savings any user will experience will vary.
**** Based on projections and measurements completed in a controlled environment. Results may vary by customer based on individual workload, configuration and software levels.
***** Based on internal IBM benchmarks using a modeled WebSphere MQ for z/OS workload driving non-persistent messages across z/OS systems in a request/response pattern. The benchmarks included various data sizes and numbers of channel pairs. The actual throughput and CPU savings users will experience may vary based on the user workload and configuration.
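Because SMC-R sits below the sockets API, ordinary TCP code needs no change at all, which is what the transparency claim above amounts to. A minimal sketch (the host name and port are invented for illustration): this exact code would run identically over standard TCP/IP or, between suitably equipped z/OS peers, over SMC-R:

```python
import socket

# Plain TCP sockets code - nothing here mentions SMC-R or RoCE.
# On a system with SMC-R enabled, the same connection can be carried
# over RDMA transparently once both peers negotiate SMC-R.
with socket.create_connection(("sysb.example.com", 5000)) as conn:
    conn.sendall(b"PING")
    reply = conn.recv(1024)
    print("received:", reply)
```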
Networking in zEnterprise between z/OS and Linux on System z
Sysplex Distributor before RoCE
(Diagram: a z/OS TCP/IP client stack connects over OSA and Ethernet to the
SD VIPA on the CPC; the z/OS TCP/IP Sysplex Distributor stack forwards over
XCF to two z/OS TCP/IP target stacks, each with its own OSA)
Traditional Sysplex Distributor
– All traffic from the client to the target application goes through the
Sysplex Distributor TCP/IP stack
– All traffic from the target application goes directly back to the client
using the TCP/IP routing table on the target TCP/IP stack
Networking in zEnterprise between z/OS and Linux on System z
Sysplex Distributor after RoCE
(Diagram: as before, but RoCE features on the client and target systems
provide a direct path that bypasses the Sysplex Distributor stack)
RoCE Sysplex Distributor
– The initial connection request goes through the Sysplex Distributor stack
– The session then flows directly between the client and the target over the
RoCE features
Note: As with all RoCE communication, the session end also flows over the OSAs.
Networking in zEnterprise between z/OS and Linux on System z
Impact of SMC-R on real z/OS workloads – early benchmark results
WebSphere to DB2 communications using SMC-R
– 40% reduction in overall transaction response time for the WebSphere 8.0
Liberty profile TradeLite workload accessing z/OS DB2 in another system,
measured in internal benchmarks *
– (Setup: a workload client simulator (JIBE) on Linux on x drives HTTP/REST
into WAS Liberty TradeLite on z/OS SYSA, which accesses DB2 on z/OS SYSB
via JDBC/DRDA over SMC-R and RoCE)
File Transfers (FTP) using SMC-R
– Up to 50% CPU savings for FTP binary file transfers across z/OS systems
when using SMC-R vs standard TCP/IP **
– (Setup: FTP client on z/OS SYSA to FTP server on z/OS SYSB over SMC-R and RoCE)
CICS to CICS IP Intercommunications (IPIC) using SMC-R
– Up to 48% reduction in response time and up to 10% CPU savings for CICS
transactions using DPL (Distributed Program Link) to invoke programs in
remote CICS regions in another z/OS system via CICS IP interconnectivity
(IPIC) when using SMC-R vs standard TCP/IP ***
– (Setup: CICS A on z/OS SYSA issues DPL calls over IPIC, SMC-R and RoCE to
CICS B running Program X on z/OS SYSB)
* Based on projections and measurements completed in a controlled environment. Results may vary by customer based on individual workload, configuration and software levels.
** Based on internal IBM benchmarks in a controlled environment using z/OS V2R1 Communications Server FTP client and FTP server, transferring a 1.2GB binary file using SMC-R (10GbE RoCE Express feature) vs standard TCP/IP (10GbE OSA Express4 feature). The actual CPU savings any user will experience may vary.
*** Based on internal IBM benchmarks using a modeled CICS workload driving a CICS transaction that performs 5 DPL calls to a CICS region on a remote z/OS system, using 32K input/output containers. Response times and CPU savings measured on the z/OS system initiating the DPL calls. The actual response times and CPU savings any user will experience will vary.
System/360 – Announced April 7, 1964
Bigger, faster, better on April 7, 2014 - plans for April 7, 2064…?
IBM zEnterprise EC12
(2827)
IBM zEnterprise BladeCenter
Extension (2458)
IBM zBC12
(2828)
Thank You