2014 LENOVO INTERNAL. ALL RIGHTS RESERVED.
Agenda
• NeXtScale Overview
• NeXtScale Family
• Client Benefits
• Introducing IBM NeXtScale System M5
• M5 Enhancements
• Target Market Segments
• Messaging: Scale, Flexible, Simple
• NeXtScale with Water Cool Technology
• Timeline
“IBM delivered server hardware of exceptional performance and provided superior support,
allowing us to rapidly integrate the system into our open standards based research infrastructure.
Complementing the technical excellence of NeXtScale System, IBM has a long track record in
creating high-performance computing solutions that gives us confidence in its capabilities...”
—Paul R Brenner,
Associate Director, HPC
The University of Notre Dame, Indiana
“In my 20 years of working with supercomputers, I’ve never had so few failures out of the box. The
NeXtScale nodes were solid from the first moment we turned them on.”
—Patricia Kovatch,
Associate Dean, Scientific Computing,
Icahn School of Medicine at Mount Sinai
 Solutions
“Hartree Centre needed a powerful, flexible server system that could drive research in energy
efficiency as well as economic impact for its clients. By extending its IBM System x platform with
IBM NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable
energy use and help its clients gain a competitive advantage.”
—Prof. Adrian Wander,
Director of the Scientific
Computing Department, Hartree Centre
Introducing IBM NeXtScale System M5
Modular, high-performance system for scale-out computing
Building blocks: chassis in standard racks. Primary workloads: High Performance Computing, Compute, Storage*, Acceleration
• Low-cost chassis provides only power and cooling
• Dense, high-performance server
• Dense storage tray (8 x 3.5” HDDs)
• Dense PCI tray (2 x 300W GPU/Phi)
• Standard 19” racks
• Top-of-rack switching and choice of fabric
• Open-standards-based toolkit for deployment and management
* M5 support to be available with 12Gb version at Refresh 1
Deliver Insight Faster: Efficient, Reliable, Secure
NeXtScale System provides the scale, flexibility and simplicity to help clients solve problems faster
Scale
• Smart delivery of scale yields better economics and greater impact per $
• Significant CAPEX and OPEX savings while conserving energy
Flexible
• Create a system tailored to precisely meet your need now
• Provides the ability to adapt rapidly to new needs and new technology
Simple
• Drive out complexity with a single architecture
• Rapid provisioning, easy to manage, seamless growth
New Compute Node fits into existing NeXtScale infrastructure
One Architecture Optimized for Many Use Cases
Chassis – NeXtScale n1200 Enclosure
• Air or Water Cool Technology
New compute node – IBM NeXtScale nx360 M5
• Dense compute
• Top performance
• Energy efficient
• Air or Water Cool Technology
• Investment protection
Storage NeX node* – nx360 M5 + Storage NeX
• Add RAID card + cable
• Dense 32TB in 1U
• Up to 8 x 3.5” HDDs
• Simple direct connect
• Mix and match
PCI NeX node (GPU / Phi) – nx360 M5 + PCI NeX
• Add PCI riser + GPUs
• 2 x 300W GPU in 1U
• Full x16 Gen3 connect
• Mix and match
* M5 support to be available with 12Gb version at Refresh 1
NeXtScale System M5 Enhancements
Incorporates Broad Customer Requirements
What’s New
- 50% more cores and up to 39% faster compute performance* with Intel
Xeon E5-2600 v3 processors (up to 18 core)
- Double the memory capacity with 16 DIMM slots (2133MHz DDR4 up to
32GB)
- Double the storage capacity with 4x 2.5” drives
- Hot swap HDD option
- New RAID slot in rear provides greater PCI flexibility
- x16 Gen3 ML2 slot supports InfiniBand / Ethernet adapters for increased
configuration flexibility at lower price (increase from x8)
- Choice of air or water cool
- Investment protection – chassis supports M4 and M5
Key Market Segments
- HPC, Technical Computing, Grid, Cloud, Analytics, Managed Service
Providers, Scale-out datacenters
- Direct and Business Partner enabled solutions
• 39% faster compute performance¹
• 50% more cores²
• 2X memory capacity³
• 14% faster memory⁴
• All-new hot-swap HDD⁵
• 2X hard drives⁵
• Full Gen3 x16 ML2⁶
• 50% more PCI slots⁷
• Choice of air or water cool
Target Segments - Key Requirements
Cloud Computing – Key Requirements:
• Mid-high bin EP processors
• Lots of memory (>256GB per node) for virtualization
• 1Gb / 10Gb Ethernet
• 1-2 solid-state drives for boot
Data Analytics – Key Requirements:
• Mid-high bin EP processors
• Lots of memory (>256GB per node)
• 1Gb / 10Gb Ethernet
• 1-2 solid-state drives for boot
High Performance Computing – Key Requirements:
• High bin EP processors for maximum performance
• High performing memory
• InfiniBand
• 4 HDD capacity
• GPU support
Data Center Infrastructure – Key Requirements:
• Low-bin processors (low cost)
• Smaller memory (low cost)
• 1Gb Ethernet
• 2 hot-swap drives (reliability)
Virtual Desktop – Key Requirements:
• Lots of memory (>256GB per node) for virtualization
• GPU support
NeXtScale M5 addresses segment requirements
Cloud Computing – NeXtScale M5 provides:
• Intel EP (mid to high bin)
• Up to 36 cores / node
• Up to 512 GB memory / node
• Ethernet (1 / 10 Gb), PCIe, ML2
• Broad range of 3.5”, 2.5” HDDs
• 2 front hot-swap drives
Data Analytics – NeXtScale M5 provides:
• Intel EP (mid to high bin)
• Up to 36 cores / node
• Up to 512 GB memory / node
• Ethernet (1 / 10 Gb), PCIe, ML2
• Broad range of 3.5”, 2.5” HDDs
High Performance Computing – NeXtScale M5 provides:
• Intel EP (high bin)
• Up to 36 cores per node
• Fast memory (2.1 GHz), 16 slots
• FDR ML2 InfiniBand, future EDR
• Broad range of 3.5” HDDs
• 4 internal 2.5” HDDs, 2 hot swap
• Up to 2 x GPUs per 1U
Data Center Infrastructure – NeXtScale M5 provides:
• Intel EP (low bin)
• Up to 36 cores / node
• Low-cost 4 / 8 GB memory
• Onboard Gb Ethernet std.
• 2 front hot-swap drives (2.5”)
• Integrated RAID slot
Virtual Desktop – NeXtScale M5 provides:
• Choice of processors
• Up to 512 GB memory
• Up to 2 x GPUs per 1U
IBM NeXtScale nx360 M5 – The Compute Node
System infrastructure – simple architecture: nx360 M5 server
• ½-wide 1U, 2-socket server
• Intel E5-2600 v3 processors (up to 18C)
• 16x DIMM slots (DDR4, 2133MHz)
• 2 front hot-swap HDD option (or std PCI slot)
• 4 internal HDD capacity
• New, embedded RAID PCI slot
• ML2 mezzanine for x16 FDR and Ethernet
• Native expansion (NeX) support – storage and GPU/Phi
IBM NeXtScale nx360 M5 Server
Server layout callouts: RAID slot; drive bay(s); x16 PCIe 3.0 slot; 16x DIMMs; dual-port ML2 x16 mezzanine card (IB/Ethernet); E5-2600 v3 CPU; x24 PCIe 3.0 slot; KVM connector; 1 GbE ports; power button and LEDs; optional hot-swap HDD or PCIe adapter.
Supported in the same chassis as the M4 version.
IBM NeXtScale nx360 M5 Server
• 4x 2.5” drives supported per node
• Choice of: hot-swap HDD option (2 hot-swap SFF HDDs or SSDs) or PCI slot option (std full-height, half-length PCIe 3.0 slot)
• Dual-port x16 ML2 mezzanine card (InfiniBand / Ethernet)
• KVM connector
• 1 GbE ports – dedicated or shared management
• Power button and LEDs
• Labeling tag for system naming and asset tagging
asset tagging
Investment Protection - Chassis supports M4 or M5 Nodes
IBM NeXtScale n1200 Enclosure
System infrastructure – optimized shared infrastructure: n1200 Enclosure
• 6U chassis, 12 bays
• ½-wide component support
• Up to 6x 900W or 1300W power supplies; N+N or N+1 configurations
• Up to 10 hot-swap fans
• Fan and Power Controller
• Mix and match compute, storage, or GPU nodes
• No built-in networking
• No chassis management required
• Mix and match M4 and M5 air-cool nodes¹
Front view: 12 half-wide bays (Bay 1–12). Rear view: two banks of 3x power supplies and 5x 80mm fans, plus the Fan and Power Controller.
NeXtScale - Choice of Air or Water Cooling
IBM NeXtScale System
Air Cool
• Air cooled, internal fans
• Fits in any datacenter
• Maximum flexibility
• Broadest choice of configurable options supported
• Supports Native Expansion nodes
  – Storage NeX
  – PCI NeX (GPU, Phi)
Water Cool Technology
• Innovative direct water cooling
• No internal fans
• Extremely energy efficient
• Extremely quiet
• Lower power
• Dense, small footprint
• Lower operational cost and TCO
• Ideal for geographies with high electricity costs or space constraints
Your choice
IBM NeXtScale System with Water Cool Technology (WCT)
System infrastructure – simple architecture: water-cool node and chassis
• Full-wide, 2-node compute tray
• 6U chassis, 6 bays (12 nodes/chassis)
• Manifolds deliver water directly to nodes
• Water circulated through cooling tubes for component-level cooling
• Intel E5-2600 v3 CPUs
• 16x DDR4 DIMM slots
• InfiniBand FDR support (ML2 or PCIe)
• 6x 900W or 1300W PSUs
• No fans except in the PSUs
• Drip sensor / error LEDs
nx360 M5 WCT compute tray (2 nodes) callouts: CPUs with liquid-cooled heatsinks; dual-port ML2 (IB/Ethernet); cooling tubes; 16x DIMMs; 1 GbE ports; labeling tag; power button and LEDs; PCI slot for Connect-IB.
n1200 WCT enclosure: 6 full-wide bays, 12 compute nodes, with the n1200 WCT manifold delivering water to the chassis.
NeXtScale – Key Messages
SCALE
• Even a small cluster can change the outcome
• Start at any size and grow as you want
• Efficient at any scale with choice of air or water cooling
• Maximum impact/$
• Optimized stacks for performance, acceleration, and cloud computing
FLEXIBLE
• Single architecture with Native Expansion
• Built on open standards
• Optimized for your data center today and tomorrow
• Channel and box-ship capable
• One part number unlocks IBM’s service and support
• Flexible storage and energy management
SIMPLE
• The back is now the front – simplify management and deployment
• Get in production faster with Intelligent Cluster
• Optimized shared infrastructure without compromising performance
• “Essentials only” design
NeXtScale – Key Messages
SCALE
Even a small cluster can
change the outcome
Start at any size and
grow as you want
Efficient at any scale
with choice of air or
water cooling
Maximum impact/$
Optimized stacks for
performance,
acceleration, and cloud
computing
Scale: The Power of Scale Delivers Benefits at Any Size
Even a small cluster can change the outcome
• Make better decisions by running larger, more sophisticated models
• Spot trends faster and more effectively by reducing total time to results
• Manage risk better by increasing accuracy and visibility of models and datasets
Game-changing results – life insurance actuarial workbook (speedup arithmetic sketched below):
• 1,700 records that took 14 hours on a single workstation now take 2.5 minutes on a small cluster
• 1 million records that took 7.5 days on 600 workstations now take 2 hours on a 3-rack cluster with only 150 nodes
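The speedups above follow directly from the quoted run times; a minimal arithmetic sketch using only the figures on this slide:

```python
# Rough speedup arithmetic for the actuarial workbook example above.
# All inputs are the figures quoted on this slide; nothing else is assumed.

single_workstation_hours = 14        # 1,700 records on one workstation
small_cluster_minutes = 2.5          # the same 1,700 records on a small cluster
speedup_small = single_workstation_hours * 60 / small_cluster_minutes
print(f"1,700-record run: ~{speedup_small:.0f}x faster on the small cluster")

farm_days, farm_workstations = 7.5, 600   # 1M records on a workstation farm
cluster_hours, cluster_nodes = 2, 150     # 1M records on a 3-rack, 150-node cluster
speedup_large = farm_days * 24 / cluster_hours
print(f"1M-record run: ~{speedup_large:.0f}x faster wall clock, "
      f"with {farm_workstations // cluster_nodes}x fewer machines")
```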
Scale: Start at Any Size. Grow in any Increment.
Growing node by node? Single nodes and chassis
• Available direct from IBM
• Optimized for availability through our partners
• Install the chassis today, grow into it tomorrow
Want to speed how quickly you can grow? Configured racks or chassis (departmental solutions)
• Shipped fully assembled
• Client driven, choice optimized
• “Starter Packs” are appliance-easy, CTO-flexible
Growing by leaps and bounds? Complete clusters and containers – rack(s) and ‘NeXtPods’
• NeXtScale can arrive ready to power on, with its ‘personality’ applied
• Racks at a time or complete, infrastructure-ready containers
Scale: Achieve extreme scale with ultimate efficiency
NeXtScale System with Water Cool Technology
• 40% more energy-efficient data center¹
• 10% more power-efficient server²
• 85% heat recovery by water
• Requires no auxiliary cooling³ – no chillers required due to warm-water cooling (up to 45°C)³
• No fans required for compute elements; small power-supply fans only; lower operational costs, and quieter
• Re-use warm water to heat other facilities
• Run processors at higher frequencies (Turbo mode)
1. Based on comparisons between air-cooled IBM iDataPlex M4 servers and water-cooled iDataPlex M4 servers
2. LRZ, a client in Germany (data center numbers)
3. Geography dependent
Water Cool Technology
Scale: Maximum impact per $. Per ft2. Per rack.
Race-car design – performance and cost point ahead of features / functions
• Top-bin E5-2600 v3 processors
• Fast memory running at 2133MHz
• Choice of SATA, SAS, or SSD on board
• Open ecosystem of high-speed IO interconnects
• More cores per floor tile
• Easy front-access serviceability
• Choice of rack infrastructure
• Light weight + high performance can reduce floor loading
Maximize the capability of your data center floor with dense and essential IT.
Customer benefits:
• 50% more high-frequency cores¹
• Memory runs 14% faster²
• One nx360 with SSDs delivers the same IO performance as 355 hard disks³
• 15% power savings with Platform LSF Energy Aware⁴
• 40% less weight per system⁵
• 80% fewer racks per solution⁶
• 2X more FLOPs per cycle than a Xeon E5-2600 v2⁷
• 50% fewer servers required⁹
• 1.1 TFLOP performance achieved per server
• 2.7X increase in FLOPs/Watt¹⁰
Scale: Platform Computing – complete, powerful, fully-supported
Applications: simulation / modeling, Big Data / Hadoop, social & mobile, analytics
Workload and resource management:
• Platform LSF Family – batch and MPI workloads with process management, monitoring, analytics, portal, and license management
• Platform Symphony Family – high-throughput, near real-time parallel compute and Big Data / MapReduce workloads
• Platform HPC – simplified, integrated HPC management for batch and MPI workloads, integrated with systems
Data management: Elastic Storage, based on General Parallel File System (GPFS) – high-performance software-defined storage
Infrastructure management: Platform Cluster Manager Family – provision and manage, from a single cluster (Standard) to dynamic clouds (Advanced)
Heterogeneous resources: compute, storage, network – virtual, physical, desktop, server, cloud
Scale: Performance Optimized Stack – From Hardware Up
Applications: simulation / modeling, risk analysis, analytics
Workload and resource management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque
Global/parallel filesystem: GPFS, Lustre, NFS
Application libraries: Intel® Cluster Studio, OpenMPI, MVAPICH2, Platform MPI
Operating systems: RHEL, SuSE, Windows, Ubuntu
Bare-metal management / provisioning / monitoring: Extreme Cluster Administration Toolkit (xCAT)
Heterogeneous resources: compute, storage, network – virtual, physical, desktop, server, cloud
Scale: GPGPU Accelerator Optimized Stack – From Hardware Up
Applications: life sciences, oil and gas, molecular dynamics, finance
Workload and resource management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque
Global/parallel filesystem: GPFS, Lustre, NFS
Application libraries: Intel® Cluster Studio, CUDA, OpenCL, OpenGL
Operating systems: RHEL, SuSE, Windows
Bare-metal management and provisioning: Extreme Cluster Administration Toolkit (xCAT)
Heterogeneous resources: compute, storage, network – virtual, physical, desktop, server, cloud
Scale: Cloud Compute Optimized Stack – From Hardware Up
Application targets: public cloud providers, private cloud, MSP/CSP
Cloud management solutions:
• OpenStack CE – for customers looking to deploy complete open-source solutions with little to no enterprise features
• IBM Cloud Manager with OpenStack – optimized with automation, security, resource sharing and monitoring over OpenStack CE
• SmartCloud Orchestrator – for customers who require optimized utilization, multi-tenancy and enhanced security
Common Cloud Management Platform: provides server, storage and network integration, and access to OpenStack APIs
Hypervisors: KVM, VMware, Xen, Hyper-V
Bare-metal management and onboarding: Puppet, xCAT, Chef, SmartCloud Provisioning
Heterogeneous resources: compute, storage, network – virtual, physical, desktop, server, cloud
NeXtScale – Key Messages
FLEXIBLE
Single Architecture with
Native Expansion
Built on Open Standards
Optimized for your data
center today and
tomorrow
Channel and box ship
capable
One part number
unlocks IBM’s service
and support
Flexible Storage and
Energy Management
Flexible: Native eXpansion – Adding Value, not Complexity
• Base node delivers robust and dense raw compute capabilities
• NeXtScale’s Native Expansion allows seamless upgrades of the base to add common functionality
• All on a single architecture, with no need for exotic connectors or unique components
Storage: nx360 M5 + Storage NeX* (RAID card + SAS cable + HDDs) = IBM NeXtScale nx360 M5 with Storage NeX
Graphics acceleration / co-processing: nx360 M5 + PCI NeX (GPU riser card + GPU/Phi) = IBM NeXtScale nx360 M5 with Accelerator NeX
* M5 support to be available with 12Gb version at Refresh 1
Flexible: Designed on Open Standards = Seamless Adoption
IBM ToolsCenter
• Consolidated, integrated suite of management tools
• Powerful bootable media creator, FW updating
OpenStack Ready
• Deploy OpenStack with Chef or Puppet
• Mirantis Fuel, SuSE Cloud, IBM SmartCloud
UEFI and IMM
• Standards-based hardware that combines diagnostics and remote control; no embedded SW
• Richer management experience and future-ready
IPMI 2.0 Compliant
• Use any IPMI-compliant management software – Puppet, Avocent, IBM Director, iAMT, xCAT, etc.
• OpenIPMI, ipmitool, ipmiutils, FreeIPMI compatible (see the sketch below)
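Because the node's IMM speaks standard IPMI 2.0 over the LAN, any generic tool can query or control it remotely. A minimal sketch driving ipmitool from Python; the hostname and credentials are placeholders, and ipmitool is assumed to be installed on the admin workstation:

```python
# Query and control an nx360 node's IMM over the network with plain IPMI 2.0.
# Assumes ipmitool is installed; host, user, and password below are placeholders.
import subprocess

IMM = ["ipmitool", "-I", "lanplus", "-H", "node01-imm.example.com",
       "-U", "USERID", "-P", "PASSW0RD"]

def ipmi(*args):
    """Run one ipmitool command against the node's IMM and return its output."""
    return subprocess.run(IMM + list(args), capture_output=True,
                          text=True, check=True).stdout.strip()

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
print(ipmi("sdr", "elist"))                 # list temperature / fan / voltage sensors
# ipmi("chassis", "power", "cycle")         # uncomment to power-cycle the node
```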
xCAT
• Provides remote and unattended methods to assist with deploying, updating, configuring, and diagnosing (example below)
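Once nodes are defined to xCAT, routine operations reduce to one-line commands; a small sketch (wrapped in Python for scripting) using standard xCAT commands against a hypothetical node group named "nextscale":

```python
# Drive a group of NeXtScale nodes with standard xCAT commands.
# Assumes an xCAT management node with the nodes already defined;
# the node range "nextscale" is a placeholder group name.
import subprocess

def xcat(cmd):
    """Run an xCAT command on the management node and return its output."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

print(xcat("nodels nextscale"))            # list the nodes in the group
print(xcat("rpower nextscale stat"))       # remote power status via the IMMs
print(xcat("rinv nextscale serial"))       # collect serial numbers / VPD
# xcat("rpower nextscale on")              # power the whole group on
```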
System Monitoring
• Friendly with open source tools like Ganglia,
Nagios, Zenoss, etc.
• Use with any RHEL/SuSE (and clones) or
Windows based tools.
Platform Computing
• Workloads managed seamlessly with Platform
LSF
• Deploy clusters easily with Platform HPC
SDN Friendly
• Networking direct to system; no integrated
proprietary switching
• Support for 1/10/40Gb, InfiniBand, FCoE, and
VFAs
Flexible: Optimized with your Data Center in Mind – today and tomorrow
The challenge:
• Package more into the data center without breaking the clients’ standards
• Lower power costs all day long – at peak usage times and slow times
• Maximize energy efficiency in the data center
The solution – NeXtScale + IBM innovation:
• Essentials-only design allows more servers to fit into the data center
• Designed to consume less power and to lower energy costs at peak and at idle
• Smart power management can drive down power needs when systems are at idle
• Choice of air- and water-cooled servers in either IBM racks or existing racks
• 40% energy-efficiency gain for water-cooled solutions
Highlights: reduce power cost during slow times; lower energy usage during the peak; 2X more servers per floor tile; our rack or yours; 40% more energy efficient with water cool; lowest operational costs with water cool.
Flexible: How Do You Want Your IT to Arrive?
• NeXtScale can ship fully configured, ready to power on
  – Fully racked and cabled
  – Labeling with user-supplied naming
  – Pre-programmed IMMs and addresses
  – Burn-in testing before shipment at no added cost
• Prefer to receive systems in boxes – no problem
Customer benefits of IBM Intelligent Cluster:
• 75% faster time from arrival to production readiness
• 1 part number needed for the entire solution support – no matter the brand of component
• Save, per rack: 105 lbs of cardboard, 54.6 ft³ of styrofoam, 288 linear feet of wood, 21,730 fewer paper inserts
Flexible: Confidence it is high quality and functioning upon arrival
• Comprehensive list of interoperability-proven components for building out solutions:
  – IBM servers
  – IBM switching, 3rd-party switching
  – IBM storage, 3rd-party storage
  – IBM software, 3rd-party software
  – Countless cables, cards, and add-ins
  – Best-recipe approach yields confident interoperability
• Each rack is built from over 9,000 individual parts
• Manufacturing LINPACK test provides lengthy burn-in on all parts in the solution
  – Confidence the parts are installed and functioning properly
  – Any failing parts are replaced prior to shipment
  – Reduces early-life part fallout for our clients
  – Consistent performance and quality are confirmed before shipment
Is this one rack, or is it >9,000 parts? It’s both.
Flexible: Global Services & Support
IBM is a recognized leader in services & support
Speed + quality
• Prevent downtime with proactive, first-rate service
• Resolve outages faster if they do occur, to protect your brand
• Optimize IT and end-user productivity – and revenue – to enhance business results
• Protect your brand reputation and keep your customer base
• Simplify support to save time, resources, and cost
By the numbers:
• 57 call centers worldwide, with regional and localized language support
• 23,000 IT support specialists worldwide who know technology
• 585 parts centers with 13 million IBM and non-IBM parts
• 94% first-call hardware success rate
• A combined total of 6.8M hardware and software service requests
• Rated #1 in technical support
• Parts are delivered within 4 hours for 99% of US customers
• 114 hardware and software development laboratories
• 75% of software calls resolved by first point of contact
Lenovo’s Service Commitment
“After the deal closes, IBM will continue to provide maintenance delivery on Lenovo’s behalf for an extended period pursuant
to the terms of a five-year maintenance service agreement with IBM. Customers who originated contracts with IBM should
not see a change in their maintenance support for the duration of the customer’s contract.”
Source: http://shop.lenovo.com/us/en/news/ibm-server
Flexible: NeXtScale Mini-SAS Storage Expansion
Natively expand beyond the node with the onboard mini-SAS port
• Simply choose available storage controllers
• Connect the nx360 node to a JBOD or storage controller of your choice
nx360 M5 mini-SAS port + RAID controller + mini-SAS cable connects to a choice of V3700 JBOD, V3700, V7000, or DCS3700 JBOD – ideal for dense storage requirements.
Example use cases: dense analytics, object storage, NFS, Hadoop, virtualized storage, low-cost object storage, low-cost block storage, secure / encrypted storage, compression, block storage, HPC.
Flexible: Dense Storage Customer
• NeXtScale chassis and nodes
  – 24 x 2-socket E5-2600 v3 nodes per chassis
  – Dual-port 10G Ethernet
  – 2x 1G management ports
  – SAS HBA with 6Gb external connector
• Storage JBODs
  – 60 hot-swap drives in 4U
  – 6 JBODs per rack
  – 4 TB NL SAS disks
  – Pure JBOD, no zoning
• Networking
  – 1 x 64-port 10G Ethernet switch (optionally 2 switches for redundancy); uplinks required
  – 2 x 48-port 1G Ethernet switches for management (1x dedicated + 1x shared port); connects to nodes, JBODs, chassis FPCs, and PDUs
1.44 petabytes of raw storage per rack! (capacity arithmetic sketched below)
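The raw-capacity figure follows directly from the drive counts above; a quick check, assuming 4 TB nearline SAS drives (the per-drive size implied by the 1.44 PB total):

```python
# Raw storage per rack for the dense-storage configuration above.
# Assumes 4 TB NL SAS drives (the value implied by the 1.44 PB total).
jbods_per_rack = 6
drives_per_jbod = 60          # 60 hot-swap drives in each 4U JBOD
tb_per_drive = 4

raw_tb = jbods_per_rack * drives_per_jbod * tb_per_drive
print(f"{raw_tb} TB = {raw_tb / 1000:.2f} PB of raw storage per rack")
# -> 1440 TB = 1.44 PB
```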
Flexible: Power efficiency designed into HW, SW and management
Efficient hardware
• 80 PLUS Platinum power supplies at over 94% efficiency – saves 3-10%
• Extreme-efficiency voltage regulation – saves 2-3%
• Larger, more efficient heat sinks require less air – saves 1-2%
• Smart sharing of fans and power supplies reduces power consumption – saves 2-3%
• Underutilized power supplies can be placed into a low-power standby mode
• Energy-efficient turbo mode
• Less parts = less power
• Energy Star Version 2(1)
Powerful energy management
• Choice of air or water cooling
• No fans or auxiliary cooling required for the water-cooled version, saving power cost
• Pre-set operating modes – tune for efficiency, performance, or minimum power
• Chassis-level power metering and control
• Power optimally designed for 1-phase or 3-phase power feeds
• Optional intelligent and highly efficient PDUs for monitoring and control
• Powerful sleep-state(2) control reduces power and latency
• Autonomous power management for various subsystems within each node
Control beyond the server
• xCAT APIs allow for embedding HW control into management applications
• LSF Energy Aware features allow for energy tuning on the fly
• Platform software can target low-bin CPU applications to lower power on CPUs in mixed environments
• Platform Cluster Manager Advanced Edition can completely shut off nodes that are not in use
• Open-source monitoring tool friendly, allowing for utilization reporting
(1) Pending announcement of product
(2) On select IBM configurations
Flexible: Energy Aware Scheduling
Optimize Power Consumption with Platform LSF®
On idle nodes – policy-driven power saving (sketched below)
  ─ Suspend the node to the S3 state (saves ~60W)**
  ─ Triggered after the node has been idle for a configurable period of time
  ─ Policy windows (e.g., 10:00 PM – 7:00 AM)
  ─ Site customizable to use other suspension methods
Power-saving-aware scheduling
  ─ Schedule jobs to use idle nodes first (power-saved nodes as a last resort)
  ─ Aware of the job request; wakes up nodes precisely on demand
  ─ Safe period before running jobs on resumed nodes
On active nodes
  ─ Ability to set the node/core frequency for a specific job / application / user
  ─ Set thresholds based on environmental factors – such as node temperature
  ─ Energy-saving policies**: minimize energy to solution or minimize time to solution by intelligently controlling CPU frequencies
  ─ Collect the power usage for an application (AC and DC)**
  ─ Make intelligent predictions: performance, power consumption and runtime of applications at different frequencies**
  ─ Manual management: suspend, resume, history
** Only available on IBM NeXtScale and iDataPlex
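The idle-node policy above boils down to a simple control loop. The following is a schematic sketch of that logic only, not Platform LSF code; the idle threshold, policy window, node list, and the suspend hook are all placeholders you would map to your site's tooling:

```python
# Schematic of the idle-node power-saving policy described above:
# suspend nodes that have been idle past a threshold, but only inside the
# configured policy window. Illustration of the policy logic, not LSF itself.
from datetime import datetime, time

IDLE_THRESHOLD_MIN = 30                       # configurable idle period
POLICY_WINDOW = (time(22, 0), time(7, 0))     # e.g. 10:00 PM - 7:00 AM

def in_policy_window(now=None):
    """True if the given (or current) time falls in the window, which wraps past midnight."""
    now = (now or datetime.now()).time()
    start, end = POLICY_WINDOW
    return now >= start or now <= end

def plan_actions(nodes, now=None):
    """nodes: [{'name': 'n001', 'idle_min': 45, 'suspended': False}, ...]"""
    return [("suspend_to_S3", n["name"]) for n in nodes
            if not n["suspended"]
            and in_policy_window(now)
            and n["idle_min"] >= IDLE_THRESHOLD_MIN]   # site hook would do the suspend

# Inside the 10 PM - 7 AM window, only n001 has been idle long enough to suspend.
print(plan_actions(
    [{"name": "n001", "idle_min": 45, "suspended": False},
     {"name": "n002", "idle_min": 5,  "suspended": False}],
    now=datetime(2014, 9, 30, 23, 15)))
```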
NeXtScale – Key Messages
SIMPLE
The back is now the
front—simplify
management and
deployment
Get in production faster
with Intelligent Cluster
Optimized shared
infrastructure without
compromising
performance
“Essentials only” design
Simple: Making management and deployment simple
NeXtScale – you don’t have to be in the dark:
• Stay in front of the rack and see things better
• Reduce service errors when maintaining systems – know what cable you are pulling
• Quick access to servers – add, remove, or power servers without touching the power cables
• Tool-less design speeds problem resolution
• Which aisle do you want to work in? Work from the cold aisle (65-80ºF)
Competition:
• It’s so dark in here
• Which cable do I pull?
• Remove power in the rear (from the right system) before pulling the system out from the front
• Work from the hot aisle (>100ºF)
Simple: In Production Faster - ENI Client Example
Intelligent Cluster significantly reduces setup time, getting clients into production at least 75%¹ faster than non-Intelligent Cluster offerings.
• Solution overview
  – 1500 server nodes in 36 racks
  – 3000 NVIDIA K20x GPGPU accelerators
  – FDR InfiniBand, Ethernet, GPFS
  – Enclosed in cold-aisle containment cage
  – #11 on the June 2014 Top500 list
• Delivered fully integrated to the client’s center
  – HW inside delivery and installation included at no additional cost
  – TOP500 Linpack run successfully 10 days after the first rack arrived
  – All servers pre-configured with customer VPD in manufacturing
• Entire solution delivered and supported as 1 part number
  – Full interoperability test and support
  – One number to call for support regardless of component
Included with Intelligent Cluster: interoperability tested – yes; HPL (Linpack) stressed / benchmarked in manufacturing – yes; IBM HW break-fix support – yes, all components; inside delivery and HW install – yes; bare-metal customization available at no charge – yes. Result: production ready.
¹ Comparison of install time for a complete roll-your-own installation versus IBM Intelligent Cluster delivery
Simple: Save Time and Resources with Intelligent Cluster
IBM fully integrates and tests your cluster, saving time and reducing complexity
Per-rack setup time, with and without Intelligent Cluster (a quick tally follows this table):
Step                 Intelligent Cluster   w/o Intelligent Cluster
Move servers to DC   15 min                40 min
Install rail kits    0                     30 min
Install servers      0                     2 hr
Cable Ethernet       0                     2 hr
Cable IB             0                     2 hr
Rack to rack         1 hr                  1 hr
Power-on test        0                     10 min
Program IMMs         0                     15 min
Program TOR          0                     10 min
Collect MAC & VPD    0                     30 min
Provision            15 min                15 min
HW verification      1 hr                  0
TOTAL TIME           2-1/2 hr              9-1/2 hr
SAVE ~7 hrs per rack!
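A quick tally of the per-step times, as reconstructed in the table above, reproduces the totals and the per-rack saving:

```python
# Per-rack setup time, with and without Intelligent Cluster integration,
# using the per-step minutes from the table above.
steps = {                      # (with Intelligent Cluster, without), in minutes
    "Move servers to DC":  (15, 40),
    "Install rail kits":   (0, 30),
    "Install servers":     (0, 120),
    "Cable Ethernet":      (0, 120),
    "Cable IB":            (0, 120),
    "Rack to rack":        (60, 60),
    "Power-on test":       (0, 10),
    "Program IMMs":        (0, 15),
    "Program TOR":         (0, 10),
    "Collect MAC & VPD":   (0, 30),
    "Provision":           (15, 15),
    "HW verification":     (60, 0),
}
with_ic = sum(t[0] for t in steps.values())
without = sum(t[1] for t in steps.values())
print(f"With Intelligent Cluster: {with_ic/60:.1f} h, without: {without/60:.1f} h, "
      f"saving ~{(without - with_ic)/60:.0f} h per rack")
# -> With: 2.5 h, without: 9.5 h, saving ~7 h per rack
```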
Simple: The Advantages of Shared Infrastructure without the Contention
• Shared power supplies and fans
  – 90% reduction in fans¹ (see the arithmetic check below)
  – 75% reduction in PSUs¹
• Each system acts as an independent 1U/2U server
  – Individually managed
  – Individually serviced and swappable servers
• Use any top-of-rack (TOR) switch
  – No contention for resources within the chassis
  – Direct access to network and storage resources
  – No management contention
• Lightweight, low-cost chassis
  – Simple midplane with no active components
  – No in-chassis IO switching
  – No left- or right-specific nodes
  – High-efficiency PSUs and fans
  – No unique chassis management required
¹ Versus a typical 1U server with 8 fans and 2 PSUs
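The reduction figures follow from comparing a fully populated chassis with twelve discrete 1U servers; a quick check using the footnoted assumption of 8 fans and 2 PSUs per typical 1U server, and the chassis counts given elsewhere in this deck (up to 10 fans, 6 power supplies):

```python
# Fan and PSU reduction for a full n1200 chassis (12 nodes, up to 10 fans,
# 6 power supplies) versus 12 discrete 1U servers (8 fans, 2 PSUs each,
# per the footnote above).
servers = 12
fans_1u, psus_1u = 8, 2
chassis_fans, chassis_psus = 10, 6

fan_reduction = 1 - chassis_fans / (servers * fans_1u)
psu_reduction = 1 - chassis_psus / (servers * psus_1u)
print(f"Fans: {fan_reduction:.0%} fewer, PSUs: {psu_reduction:.0%} fewer")
# -> Fans: ~90% fewer, PSUs: 75% fewer
```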
Simple: “Essentials Only” Design
• Only includes the essentials
• Two production 1Gb Intel NICs; dedicated or shared 1Gb for management
• Standard PCI card support
• Flexible ML2/mezzanine for IO expansion
• Power, basic LightPath, and KVM crash-cart access
• Simple ‘pull out’ asset tag for naming or RFID
• Not painted black – just left as silver
• Clean, simple, and low cost
• Blade-like weight/size – rack-server-like individuality/control
• NeXtScale delivers basic, performance-centric IT
NeXtScale nx360 M5
“I can’t see my servers, I don’t care what color they are”
“I don’t use 24 DIMMs – why pay for a system to hold them?”
“I only need a RAID mirror for the OS – I don’t want extra HDD bays”
“I only need a few basic PCI/IO options”
3 Ways to Cool your Datacenter
Air Cooled
• Standard air flow with internal fans
• Good for lower kW densities
• Less energy efficient
• Consumes more power – higher OPEX
• Typically used with raised floors, which add cost and limit airflow out of tiles
• Unpredictable cooling – hot spots in one area, freezing in another
Rear Door Heat Exchangers
• Air cooled, supplemented with an RDHX door on the rack
• Uses chilled water
• Works with all IBM servers and options
• Rack becomes thermally transparent to the data center
• Enables extremely tight rack placement
Direct Water Cooled
• 100% water cooled
• No fans or moving parts in the system
• Most energy-efficient data center
• Most power-efficient servers
• Lowest operational cost
• Quieter due to no fans
• Run processors in turbo mode for maximum performance
• Warm-water cooling means no expensive chillers required
• Good for geographies with high electricity cost
NeXtScale System with Water Cool Technology
Achieve extreme scale with ultimate efficiency
• 40% more energy-efficient data center¹
• 10% more power-efficient server²
• 85% heat recovery by water
• Requires no auxiliary cooling³ – no chillers required due to warm-water cooling (up to 45°C)³
• No fans required for compute elements; small power-supply fans only; lower operational costs, and quieter
• Re-use warm water to heat other facilities
• Run processors at higher frequencies (Turbo mode)
1. Based on comparisons between air-cooled IBM iDataPlex M4 servers and water-cooled iDataPlex M4 servers
2. LRZ, a client in Germany (data center numbers)
3. Geography dependent
Water Cool Technology
NeXtScale nx360 M5 WCT Dual Node Compute Tray
System infrastructure – simple architecture: water-cool compute node
• 2 compute nodes per full-wide 1U tray
• Water circulated through cooling tubes for component-level cooling
• Dual-socket Intel E5-2600 v3 processors (up to 18C)
• 16x DIMM slots (DDR4, 2133MHz)
• InfiniBand FDR support via choice of ConnectX-3 ML2 adapter or Connect-IB PCIe adapter
• Onboard GbE NICs
nx360 M5 WCT compute tray (2 nodes) callouts: CPUs with liquid-cooled heatsinks; dual-port ML2 (IB/Ethernet); x16 ML2 slot; PCI slot for Connect-IB; cooling tubes; 16x DIMM slots; 1 GbE ports; power button and LEDs; labeling tag; water inlet and outlet.
nx360 M5 WCT Compute Tray (2 nodes) – Front Panel
• 2 compute nodes per tray, 6 trays per 6U chassis (12 servers)
• Dual x16 ML2 slot supports InfiniBand FDR (optional)
• PCIe adapter support for Connect-IB or Intel QDR (optional)
• GbE dedicated and GbE shared NIC
Front panel* callouts, per node (Node #1 and Node #2): dual-port ML2 (IB/Ethernet); PCI slot for InfiniBand; KVM connector; 1GbE / shared NIC; power button and LEDs (power, location, log, error). Rear view: water inlet and water outlet.
* Configuration dependent. Configuration shown includes ML2 and PCI adapters.
NeXtScale n1200 WCT Enclosure – Water Cool Chassis
System infrastructure – simple architecture: water-cool chassis
IBM NeXtScale n1200 WCT Enclosure
• 6U chassis, 6 bays
• Each bay houses a full-wide, 2-node tray (12 nodes per 6U chassis)
• Up to 6x 900W or 1300W power supplies; N+N or N+1 configurations
• No fans except in the PSUs
• Fan and Power Controller
• Drip sensor and error LEDs for detecting water leaks
• No built-in networking
• No chassis management required
Front view: shown with 12 compute nodes installed (6 trays). Rear view: two banks of 3x power supplies, rear fillers/EMC shields, and the Fan and Power Controller.
nx360 M5 WCT Manifold Assembly
• Manifolds deliver water directly to and from each of the compute nodes within the chassis via water inlet and outlet quick connects
• Modular design enables multiple configurations via a sub-assembly building block per chassis drop
• 6 models: 1, 2, 3, 4, 5 or 6 chassis drops
Callouts: n1200 WCT chassis; single manifold drop (1 per chassis); 6-drop manifold.
NeXtScale M5 Product Timeline
A Lot More Coming
Currently shipping: n1200 Chassis, nx360 M4, Storage NeX, PCIe NeX
Announce: Sept. 8, 2014 | Shipments begin: Sept. 30, 2014 | GA: Nov. 19, 2014
• nx360 M5 compute node (air cool and water cool)
• Supports existing 6U chassis (air)
• New 6U chassis (water)
• 14 processor SKUs
• PCI NeX support (GPU/Phi)
Refresh 1 – GA: Jan 2015
• 8 additional processors
• NVIDIA K80 support
• Storage NeX 12Gb support
• Mini-SAS port
• -48VDC power supply
Future: more storage, more accelerators, more IO options, next-gen processors, 4 GPU / 4 hot-swap drive tray, EDR support, broader SW ecosystem, OS support
Application Ready Solutions simplifies HPC, speeds delivery
Developed in partnership with leading ISVs, based on reference architectures
Solution building blocks: applications, accelerators, grid, networking, storage, and IBM NeXtScale System™ compute nodes, delivered as IBM Intelligent Cluster™
• IBM Platform™ LSF® – workload management platform with intelligent, policy-driven scheduling features
• IBM Platform Symphony – run compute- and data-intensive distributed applications on a scalable, shared grid
• IBM Platform HPC – out-of-the-box features reduce the complexity of an HPC environment
• IBM Platform Cluster Manager – quickly and simply provision, run, manage, and monitor HPC clusters
IBM Application Ready Solution for CLC bio
Accelerate time to results for your genomics research
Easy to use, performance optimized solution architected
for CLC bio Genomic Server and Workbench software
 Client support for increased demand for genomics sequencing
 Drive down cost, speed up the assembly, mapping and analysis
involved in the sequencing process with integrated solution
 Modular solution approach enables easy scalability as workloads
increase
“It has been a pleasure to work with IBM, optimizing
our enterprise software running on the IBM Application
Ready Solution for CLC bio platform. We are proud to
offer this pre-configured, scalable high-performance
infrastructure with integrated GPFS to all our clients
with demanding computational workloads.”
- Mikael Flensborg, Director of Global Partner Relations
CLC bio, A QIAGEN® Company
 Learn more: solution brief, reference architecture
Use case: next-generation sequencing. Management software: IBM Platform HPC, Elastic Storage (based on GPFS). All configurations use 10 Gigabit switch / adapters, 2 TB disk and 128 GB memory per compute node.
• 15 human genome (37x) or 120 human exome (150x) per week: single head node (x3550 M4), 3 compute nodes (x240 or nx360), Storwize V7000 Unified 20 TB
• 30 human genome (37x) or 240 human exome (150x) per week: dual head nodes (x3550 M4), 6 compute nodes, Storwize V7000 Unified 55 TB
• 60 human genome (37x) or 480 human exome (150x) per week: dual head nodes (x3550 M4), 12 compute nodes, Storwize V7000 Unified 90 TB
IBM Application Ready Solution for Algorithmics
Optimized high-performance solution for risk analytics
Easy to use, performance-optimized solution architected for the IBM Algorithmics Algo One solution
• Supported software: Algo Credit Manager, Algo Scenario Engine, Algo Risk Application, Algo RiskWatch, Algo Aggregation Services (Fanfare)
• Easy-to-deploy, integrated, high-performance cluster environment
• Based on a “best practices” reference architecture, lowering risk
• User-friendly portal provides easy access to and control of resources
Read the analyst paper
“Many firms will benefit from the Application Ready Solution for Algorithmics to accelerate risk analytics and improve insight. This solution helps lower costs and mitigate IT risk by delivering an integrated infrastructure optimized for active risk management.”
— Dr. Srini Chari, Managing Partner, Cabot Partners
Use case: Algo One risk analysis
Workload                               Small         Medium             Large
Management node for Algo / PCM         1             1 (can be shared)  1 (can be shared)
Management node for Symphony           1             2 (shared)         2 (shared)
Compute servers – x240 or nx360        6             14                 36 (or more)
Compute cores (total)                  96            224                574
Compute – total memory (GB)            768           1792               4608
Elastic Storage (GPFS) servers         None          2                  2
Storage – V3700 SAS, shared storage    GPFS, 31 TB   GPFS, 62 TB        GPFS, 124 TB
Network – FDR IB switch / adapters     10 GbE        10 GbE             10 GbE
Software: IBM Platform Cluster Manager (PCM), IBM Platform Symphony, Elastic Storage, DB2 Enterprise (opt.)
IBM Application Ready Solution for ANSYS
Simplified, high-performance simulation environment
Configurations are sized for Computational Fluid Dynamics (Fluent, CFX) and Structural Mechanical (ANSYS) workloads:
• CFD small: 1 job of 15+M cells using all 120 cores, or 6 jobs of 2.5+M cells each – 6 compute nodes
• CFD medium: 1 job of 25+M cells using all 200 cores, or 10 jobs of 2.5+M cells each – 10 compute nodes
• CFD large: 1 job of 200+M cells using all 840 cores, or 20 jobs of 10+M cells each – 42 compute nodes
• Structural small: 4 large jobs of 2–5 MDOF each – 6 compute nodes
• Structural medium: 10 large jobs of 2–5 MDOF each – 10 compute nodes
• Structural large: 15 large jobs of 10–20 MDOF each – 42 compute nodes
Building blocks (quantities per configuration as above):
• Head node – nx360 M4: single in every configuration
• Compute – nx360 M4: E5-2680 v2 10C (CFD) or E5-2670 v2 8C (Structural), 128 GB memory, diskless
• GPU node* – PCI NeX: 2 x NVIDIA K40, 256 GB memory, 2 x 800 GB SSD
• Visualization* – PCI NeX: NVIDIA GRID K2 (up to 2 x K2 in the largest configuration), 256 GB memory, 2 x 800 GB SSD
• File system* – DS3524, 256 GB memory
• Network: Gigabit Ethernet in all configurations; FDR InfiniBand in most configurations
• Management software: Platform HPC or Platform LSF; Elastic Storage (GPFS file system, optional)
* Optional
Configuration shown is based on IBM NeXtScale System™. IBM Flex System™ x240 with E5-2600 v2 compute nodes is also available. Both systems are available to order as IBM Intelligent Cluster™. To learn more: read the solution brief and reference architecture.
Call to Action
1. Lead with NeXtScale on all x86 Technical Computing (HPC) opportunities
2. Look for NeXtScale opportunities in Cloud Computing, Datacenter
Infrastructure, Data Analytics, and Virtual Desktop
3. Evaluate customer’s energy efficiency requirements to assess if Water
Cooling is appropriate for them
4. Utilize customer seed systems
5. Learn more about NeXtScale from the links on the Resources page
IBM NeXtScale M5 – Resources / Tools
Product Resources:
• Announcement Page Link
• Announcement Webcast (replay) Link
• Product Page Link
• Data Sheet Link
• Product Guide Link
• Virtual Tour
  – Air Cool Link
  – Water Cool Link
• Product Animation Link
• Infographic IBM
• Blog Link
Benchmarks:
• SPEC_CPU2006 – NeXtScale nx360 M5 with E5-2667 v3 Link
• SPEC_CPU2006 – NeXtScale nx360 M5 with E5-2699 v3 Link
Sales Tools:
• Sales Kit IBM PW
• Seller Presentation IBM PW
• Client Presentation IBM PW
• Sales Education IBM PW
• Technical Education IBM PW
• Seller Training Webcast: NeXtScale M5, GSS Link
• VITO Letters IBM PW
• Quick Proposal Resource Kit Link
Client Videos:
• Caris Life Sciences Link
• Hartree Centre Link
• Univ of Notre Dame Link
Analyst Papers:
• Cabot Partners: 3D Virtual Desktops by Perform Link
• Intersect360: Hyperscale Computing – No Frills Clusters at Scale Link