Extreme Computing standard slide deck

Table of Contents
BULL – Company introduction
bullx – European HPC offer
BULL/Academic community co-operation
+ references
ISC’10 – News from Top European HPC Event
Discussion / Forum
Bull Strategy Fundamentals
Jaroslav Vojtěch, BULL HPC sales representative
Bull Group
A growing and profitable company
A solid customer base
- Public sector, Europe
Bull, Architect of an Open World™
- Our motto, our heritage, our culture
Group commitment to become a leading player in Extreme Computing in Europe
- The largest HPC R&D effort in Europe
- 500 Extreme Computing experts = the largest pool in Europe
Business segments and key offerings
Enterprise Hardware & System Solutions – €358 m
- Mainframes
- Unix AIX systems
- x86 systems (Windows/Linux)
- Supercomputers
Services & Solutions – €483 m
- Solution integration
- Consultancy, optimization
- Extreme Computing & Storage
- Systems Integration
- Open Source
- Business Intelligence
- Security solutions
- Infrastructure integration
- Telco
- e-govt.
- Outsourcing
Maintenance & Product-Related Services – €192 m
- Outsourcing Operations
- Green IT
- Data Center relocation
- Disaster recovery
- Third-party product maintenance
Global offers and third-party products – €77 m
- Others
Extreme Computing offer
Boris Mittelmann, HPC Consultant
Bull positioning in Extreme Computing
- Presence in education, government and industrial markets
- From mid-size solutions to high end
- On the front line for innovation: large hybrid system for GENCI
- Prepared to deliver petaflops-scale systems starting in 2010
The European Extreme Computing provider
Addressing the HPC market
[Market chart – recoverable information:] Bull's targeted HPC market in EMEA grows from $3B in 2007 (divisional, departmental and workstation segments) to $5B in 2012, extended upward to supercomputers.
HPC Grand Challenges
• PetaFlops-class HPC
• Expand into manufacturing, oil and gas
• Open Framework: OS, NFS, DB and Services
• Tera-100 CEA project
• Hybrid architectures
• Leverage Intel Xeon roadmap, time to market
• Manage and deliver complex projects
Our ambition: be the European leader in Extreme Computing
Target markets for Bull in Extreme Computing
Government
- Defense
- Economic Intelligence
National Research Centers
- Weather prediction
- Climate research, modeling and change
- Ocean circulation
Oil & Gas
- Seismic: imaging, 3D interpretation, pre-stack data analysis
- Reservoir modeling & simulation
- Geophysics sites Data Center
- Refinery Data Center
Automotive & Aerospace
- CAE: Fluid dynamics, Crash simulation
- EDA: Mechatronics, Simulation & Verification
Finance
- Derivatives Pricing
- Risk Analysis
- Portfolio Optimization
The most complete HPC value chain in Europe
Offer spanning the full chain, from Bull organizations to customers:
- R&D and system design (Bull Systems)
- Design, architecture and deployment services
- Hosting / Outsourcing services
- Operations and management (SLA)
- Security: encryption, access control
+500 specialists in Europe
Innovation through partnerships
Bull's experts are preparing the intensive computing technologies of tomorrow by playing an active role in many European cooperative research projects.
A strong commitment to many projects, such as:
- Infrastructure projects: FAME2, CARRIOCAS, POPS, SCOS
- Application projects: NUMASIS (seismic), TELEDOS (health), ParMA (manufacturing, embedded), POPS (pharmaceutical, automotive, financial applications, multi-physical simulations…), HiPIP (image processing), OPSIM (numerical optimization and robust design techniques), CSDL (complex systems design), EXPAMTION (CAD Mechatronics), CILOE (CAD Electronics), APAS-IPK (Life Sciences)
- Tools: application parallelizing, debugging and optimizing (PARA, POPS, ParMA)
- Hybrid systems: OpenGPU
Hundreds of Bull experts are dedicated to cooperative projects related to HPC innovation.
Major Extreme Computing trends and issues
Multi-core
• Multi-core is here to stay and multiply
• Programming for multi-core is THE HPC challenge
Accelerators
• Incredible performance per watt
• Turbo-charge performance… by a factor of 1 to 100…
Networking
• Prevalence of open architectures (Ethernet, InfiniBand)
Storage
• Vital component
• Integrated through parallel file system
Bull's vision
Innovation
- bullx blade system
- bullx accelerator blade
- bullx supernode SMP servers
- bullx cluster suite peta-scalability
- mobull containers
- Research with European Institutes
Performance/Watt
- Accelerators
- Green data center design
Mid-size to high-end
- bullx blade system
- Mid-size systems
Cost efficiency
- Off-the-shelf components
- Optimization
- Integration
The bullx range
Designed with Extreme Computing in mind
bullx cluster suite & hardware for peta-scalability
[Architecture stack – recoverable information:]
Application environment
- Development libraries & tools; execution environment; job scheduling; resource management; MPIBull2
System environment
- Installation/configuration: Lustre config, Ksis, Nscontrol, parallel commands (// cmds)
- Monitoring/control/diagnostics: Nagios, Ganglia
- Cluster database
File systems: Lustre, NFSv4, NFSv3
Linux OS and Linux kernel; interconnect access layer (OFED, …)
Hardware
- XPF platforms: bullx blade system, bullx supernodes, bullx rack-mounted servers; accelerators
- Storage: disk arrays
- Administration network: GigE network switches
- HPC interconnect: InfiniBand/GigE interconnects
- Water cooling
The bullx blade system
Dense and open
Recognized as "Best HPC server product or technology" and among the "Top 5 new products or technologies to watch"
bullx blade system
Dense and open
No compromise on:
Performance
- Latest Xeon processors from Intel (Westmere-EP)
- Memory-dense
- Fast I/Os
- Fast interconnect: fully non-blocking InfiniBand QDR
- Accelerated blades
Density
- 12% more compute power per rack than the densest equivalent competitor solution
- Up to 1296 cores in a standard 42U rack
- Up to 15.2 Tflops of compute power per rack (with CPUs) – see the back-of-envelope check below
Efficiency
- All the energy efficiency of Westmere-EP, plus an exclusive ultra-capacitor
- Advanced reliability (redundant power and fans, diskless option)
- Water-cooled cabinet available
Openness
- Based on industry standards and open-source technologies
- Runs all standard software
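As a sanity check, the density figures above are mutually consistent. Assuming six 7U chassis of 18 dual-socket blades per 42U rack, and 2.93 GHz six-core Westmere-EP parts at 4 double-precision flops per core per cycle (the clock grade is an assumption; it is not stated on this slide):

\[ 6 \times 18 \times 2 \times 6 = 1296\ \text{cores per rack} \]
\[ 1296 \times 2.93\,\text{GHz} \times 4\,\text{flops/cycle} \approx 15.2\ \text{Tflops per rack} \]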
bullx supernode
An expandable SMP node for memory-hungry applications
SMP of up to 16 sockets based on the Bull-designed BCS:
• Intel Xeon Nehalem-EX processors
• Shared memory of up to 1TB (2TB with 16GB DIMMs)
Available in 2 formats:
• High-density 1.5U compute node
• High I/O connectivity node
RAS features:
• Self-healing of the QPI and XQPI
• Hot-swap disks, fans, power supplies
Green features:
• Ultra Capacitor
• Processor power management features
bullx rack-mounted systems
A large choice of options

R423 E2 – SERVICE NODE: enhanced connectivity and storage
 2U
 Xeon 5600
 2-Socket
 18 DIMMs
 2 PCI-Express x16 Gen2
 Up to 8 SATA2 or SAS HDD
 Redundant 80 PLUS Gold power supply
 Hot-swap fans

R422 E2 – COMPUTE NODE: 2 nodes in 1U for unprecedented density (NEW: more memory)
 Xeon 5600
 2x 2-Socket
 2x 12 DIMMs
 QPI up to 6.4 GT/s
 2x 1 PCI-Express x16 Gen2
 InfiniBand DDR/QDR embedded (optional)
 2x 2 SATA2 hot-swap HDD
 80 PLUS Gold PSU

R425 E2 – VISUALIZATION: supports latest graphics & accelerator cards
 4U or tower
 2-Socket
 Xeon 5600
 18 DIMMs
 2 PCI-Express x16 Gen2
 Up to 8 SATA2 or SAS HDD
 Powerful power supply
 Hot-swap fans
GPU accelerators for bullx
NVIDIA® Tesla™ computing systems: teraflops many-core processors that provide outstanding, energy-efficient parallel computing power

NVIDIA Tesla C1060 – to turn an R425 E2 server into a supercomputer
 Dual-slot wide card
 Tesla T10P chip
 240 cores
 Performance: close to 1 Tflops (32-bit FP)
 Connects to PCIe x16 Gen2

NVIDIA Tesla S1070 – the ideal booster for R422 E2 or S6030-based clusters
 1U drawer
 4 x Tesla T10P chips
 960 cores
 Performance: 4 Tflops (32-bit FP) – see the consistency check below
 Connects to 2 PCIe x16 Gen2
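Those headline numbers can be reproduced from the core counts, assuming the shader clocks of this GPU generation (1.296 GHz for the C1060, up to 1.44 GHz for the S1070-500, with 3 single-precision flops per core per cycle from the dual-issue MAD+MUL path – these clocks are an assumption, not stated on this slide):

\[ 240 \times 1.296\,\text{GHz} \times 3 \approx 933\ \text{Gflops (C1060)} \]
\[ 4 \times 240 \times 1.44\,\text{GHz} \times 3 \approx 4.1\ \text{Tflops (S1070)} \]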
Bull Storage Systems for HPC
Integrated with the bullx cluster suite:

StoreWay Optima 1500
- SAS/SATA
- 3 to 144 HDDs
- Up to 12 host ports
- 2U drawers

StoreWay EMC CX4
- FC/SATA
- Up to 480 HDDs
- Up to 16 host ports
- 3U drawers

DataDirect Networks S2A 9900 (consult us)
- SAS/SATA
- Up to 1200 HDDs
- 8 host ports
- 4U couplet + 2/3/4U drawers
Bull Cool Cabinet Door
Bull's contribution to reducing energy consumption
Enables the world's densest Extreme Computing solution!
- 28 kW/m² (40 kW on 1.44 m²)
- 29 'U'/m² (42U + 6 PDUs on 1.44 m²)
77% energy saving compared to air conditioning! (arithmetic below)
- Water thermal density is much more efficient than air
- 600 W instead of 2.6 kW to extract 40 kW
Priced well below all competitors
- €12K for a fully redundant rack
- Same price as Schroff for twice the performance (20 kW)
- Half the price of HP for better performance (35 kW) and better density
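The 77% figure follows directly from the fan-power comparison above:

\[ \frac{2.6\,\text{kW} - 0.6\,\text{kW}}{2.6\,\text{kW}} \approx 77\% \]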
Extreme Computing solutions
Built from standard components, optimized by Bull's innovation
- Hardware platforms: bullx servers
- Software environments: bullx cluster suite
- Interconnect
- Storage systems: StoreWay
- Services:  Design   Architecture   Project Management   Optimisation
Choice of Operating Systems
Standard Linux distribution + bullx cluster suite
- A fully integrated and optimized HPC cluster software environment
- A robust and efficient solution delivering global cluster management…
- … and a comprehensive development environment
Microsoft Windows HPC Server 2008
- An easy-to-deploy, cost-effective solution with enhanced productivity and scalable performance
bullx cluster suite
Mainly based on Open Source components
Engineered by Bull to deliver RAS features: Reliability, Availability, Serviceability
Cluster DB benefits
- Master complexity: 100,000+ nodes
- Make management easier, monitoring accurate and maintenance quicker
- Improve overall utilization rate and application performance
bullx cluster suite advantages
- Unlimited scalability
- Automated configuration
- Fast installation and updates
- Boot management and boot time
- Hybrid systems management
- Health monitoring & preventive maintenance
Software Partners
Batch management
- Platform Computing LSF
- Altair PBS Pro
Development, debugging, optimisation
- TotalView
- Allinea DDT
- Intel Software
Parallel File System
- Lustre
Industrialized project management
Expertise & project management:
1. Solution design
 ─ Servers
 ─ Interconnect
 ─ Storage
 ─ Software
2. Computer room design
 ─ Racks
 ─ Cooling
 ─ Hot spots
 ─ Air flow (CFD)
3. Factory Integration & Staging
 ─ Rack integration
 ─ Cabling
 ─ Solution staging
4. On-site Installation & acceptance
 ─ Installation
 ─ Connection
 ─ Acceptance
5. Start production
6. Trainings / Workshops
 ─ Administrators
 ─ Users
7. Support and Assistance
 ─ On-site engineers
 ─ Support from Bull HPC expertise centre
 ─ Call centre 7 days/week
 ─ On-site intervention
 ─ Software support
 ─ HPC expert support
8. Partnership
 ─ Joint research projects
Factory integration and staging
Two integration and staging sites in Europe, to guarantee delivery of turnkey systems:
- 5,300 m² of technical platforms available for assembly, integration, tests and staging
- 12,000 m² of logistics platforms
- 60 test stations
- 150 technicians
- 150 logistics employees
- Certified ISO 9001, ISO 14001 and OHSAS 18000
mobull, the container solution from Bull
The plug & boot data center
- Up to 227.8 Teraflops per container
- Up and running within 8 weeks
- Innovative cooling system
- Can be installed indoors or outdoors
- Available in Regular or Large sizes
mobull, the container solution from Bull
- MOBILE: transportable container
- DENSE: 550 kW, 227.8 Tflops, 18 PB
- FLEXIBLE: modular; buy / lease; hosts any 19'' equipment – servers, storage, network
- MADE TO MEASURE: can host bullx servers
- PLUG & BOOT: fast deployment; complete turnkey solution
- POWERFUL
A worldwide presence
Worldwide references in a variety of sectors
Educ/Research
Industry
Aerospace/Defence
… and many others
bullx design honoured by OEM of CRAY
Perspectives
HPC Roadmap for 2009-2011
[Roadmap chart, 09Q2 through 11Q4 – recoverable highlights:]
- Processors: Nehalem-EP → Westmere-EP → Sandy Bridge-EP; Nehalem-EX → Westmere-EX
- Twin servers: R422-E2 (2x2 Nehalem-EP, std/DDR/QDR) → R422-E2 (Westmere-EP) → R422-E3
- Rack servers: R423-E2 and R425-E2 (2x Nehalem-EP, then Westmere-EP) → R423-E3 and R425-E3
- Blades: INCA – 18x blades with 2x Nehalem-EP (ultracapacitor, 2x Ethernet switch, IB/QDR interconnect, ESM 10Gb) → INCA with Westmere-EP → INCA with Sandy Bridge-EP; GPU blades
- SMP: R480-E1 (Dunnington, 24x cores) → MESCA with Nehalem-EX (4S3U/4SNL, then 8SNL/16S3U) → MESCA with Westmere-EX → next platform
- GPUs: nVIDIA C1060/S1070 (on R422-E1/E2 with Tesla S1070) → nVidia T20 / new generation of accelerators & GPUs, on blade/SMP systems
- IB interconnect: QDR 36p → QDR 324p → EDR
- Racks: air-cooled 20 kW → air-cooled (20 kW) or water-cooled (40 kW) → direct liquid
- Storage: Optima/EMC CX4 → DDN 9900/66xx/LSI-Panasas (TBC) → future storage offer
- Cluster suite: XR 5v3.1U1 – ADDON2 → XR 5v3.1U1 ADDON3 → new-generation cluster manager Extreme/Enterprise
Tentative dates, subject to change without notice
A constantly innovating Extreme Computing offer
- 2008: hybrid clusters (Xeon + GPUs)
- 2009: integrated clusters
- 2010: high-end servers with petaflops performance
Our R&D initiatives address key HPC design points
The HPC market makes extensive use of two architecture models; Bull R&D invests in both:
Scale-out → bullx blade system
- Massive deployment of “thin nodes”, with low electrical consumption and limited heat dissipation
- Key issues: network topology, system management, application software
Scale-up → bullx super-nodes
- “Fat nodes” with many processors and large memories to reduce the complexity of very large centers
- Key issues: internal architecture, memory coherency
Ambitious long-term developments in cooperation with CEA, the System@tic competitiveness cluster, the Ter@tec competence centre, and many organizations from the worlds of industry and education
Tera 100: designing the 1st European petascale system
Collaboration contract between Bull and CEA, the French Atomic Energy Authority
Joint R&D:
- High-performance servers
- Cluster software for large-scale systems
- System architecture
- Application development
- Infrastructures for very large Data Centers
Operational in 2010

Tera 100 in 5 figures:
- 100,000 cores (x86 processors)
- 300 TB memory
- 500 GB/s data throughput
- 580 m² footprint
- 5 MW estimated power consumption
Product Descriptions – ON REQUEST
BULL IN EDUCATION/RESEARCH - References
University of Münster
Germany's 3rd largest university and one of the foremost centers of German intellectual life
Need
- More computing power and a high degree of flexibility, to meet the varied requirements of the different codes to be run
Solution
A new-generation bullx system, installed in 2 phases
Phase 1
- 2 bullx blade chassis containing 36 bullx B500 compute blades
- 8 bullx R423 E2 service nodes
- DataDirect Networks S2A9900 storage system
- Ultra-fast InfiniBand QDR interconnect
- Lustre shared parallel file system
- hpc.manage cluster suite (from Bull and s+c) and CentOS Linux
Phase 2 (to be installed in 2010)
- 10 additional bullx blade chassis containing 180 bullx B500 compute blades equipped with Intel® Xeon® ‘Westmere’
- 4 future SMP bullx servers, with 32 cores each
27 Tflops peak performance at the end of phase 2
University of Cologne
One of Germany's largest universities, it has been involved in HPC for over 50 years
Need
- More computing power to run new simulations and refine existing simulations, in such diverse areas as genetics, high-tech materials, meteorology, astrophysics, economics
Solution
A new-generation bullx system, installed in 2 phases
Phase 1 (2009)
- 12 bullx blade chassis containing 216 bullx B500 compute blades
- 12 bullx R423 E2 service nodes
- 2 DataDirect Networks S2A9900 storage systems
- Ultra-fast InfiniBand QDR interconnect
- Lustre shared parallel file system
- bullx cluster suite and Red Hat Enterprise Linux
- Bull water-cooled racks for compute racks
Phase 2 (2010)
- 34 additional bullx blade chassis containing 612 bullx B500 compute blades equipped with Intel® Xeon® ‘Westmere’
- 4 future SMP bullx servers, with 128 cores each
Performance at the end of phase 2: 100 Tflops peak, 26 TB RAM and 500 TB disk storage
Jülich Research Center
The leading and largest HPC centre in Germany; a major contributor to Europe-wide HPC projects
JuRoPa supercomputer
- “Jülich Research on Petaflops Architectures”: accelerating the development of high-performance cluster computing in Europe
- 200-teraflops general-purpose supercomputer
- Bull is prime contractor in this project, which also includes Intel, ParTec and Sun
HPC-FF supercomputer
- 100 Teraflops to host applications for the European Union Fusion community
Bull cluster:
- 1,080 Bull R422 E2 compute nodes
- New-generation Intel® Xeon® 5500 series processors interconnected via an InfiniBand® QDR network
- Water-cooled cabinets for maximum density and optimal energy efficiency
Together, the 2 supercomputers rank #10 in the TOP500, with 274.8 Tflops (Linpack); efficiency: 91.6%
GENCI - CEA
A hybrid architecture designed to meet production and research needs, with a large cluster combining general-purpose servers and specialized servers:
- 1068 Bull nodes, i.e. 8544 Intel® Xeon® 5500 cores, providing a peak performance of 103 Tflops
- 48 NVIDIA® GPU nodes, i.e. 46000 cores, providing an additional theoretical performance of 192 Tflops
- 25 TB memory
- InfiniBand interconnect network
- Integrated Bull software environment based on Open Source components
- Common Lustre® file system
- Outstanding density with water cooling
295 Tflops peak: the first large European hybrid system
“In just two weeks, a common team from Bull and CEA/DAM successfully installed GENCI's new supercomputer for the CCRT. Three days after the installation, we are already witnessing the exceptional effectiveness of the new 8000 X5570-core cluster, which has achieved an 88% efficiency on the Linpack benchmark, demonstrating the sheer scalability of the Bull architecture and the remarkable performance of Intel's Xeon 5500 processor.” (Jean Gonnord, Program Director for Numerical Simulation at CEA/DAM, on the occasion of the launch of the Intel Xeon 5500 processor in Paris)
Cardiff University
One of Britain's leading teaching and research universities
Need
- Provide a central HPC service to users in the various academic schools, who previously had to use small departmental facilities
- Foster the adoption of advanced research computing across a broad range of disciplines
- Find a supplier that will take a partnership approach, including knowledge transfer
Solution
- 25 Teraflops peak performance
- Over 2,000 Intel® Xeon® Harpertown cores with InfiniBand interconnect
- Over 100 TB of storage
The partnership between Cardiff and Bull involves the development of a centre of excellence for high-end computing in the UK, with Cardiff particularly impressed by Bull's collaborative spirit.
“The University is delighted to be working in partnership with Bull on this project that will open up a range of new research frontiers,” said Prof. Martyn Guest, Director of Advanced Research Computing.
Commissariat à l’Energie Atomique
France's Atomic Energy Authority (CEA) is a key player in European research. It operates in three main areas: energy; information technology and healthcare; defence and security.
Need
- A world-class supercomputer to run the CEA/DAM's Nuclear Simulation applications
Solution
- A cluster of 625 Bull NovaScale servers, including 567 compute servers, 56 dedicated I/O servers and 2 administration servers
- 10,000 Intel® Itanium® 2 cores
- 30 terabytes of core memory
- Quadrics interconnect network
- Bull integrated software environment based on Open Source components
- A processing capacity in excess of 52 teraflops
- #1 European supercomputer (#5 in the world) in the June 2006 TOP500 Supercomputer ranking
“Bull offered the best solution both in terms of global performance and cost of ownership, in other words, acquisition and operation over a five-year period.” – Daniel Verwaerde, CEA, Director of nuclear armaments
“It is essential to understand that what we are asking for is extremely complex. It is not simply a question of processing, networking or software. It involves ensuring that thousands of elements work effectively together and integrating them to create a system that faultlessly supports the different tasks it is asked to perform, whilst also being confident that we are supported by a team of experts.” – Jean Gonnord, Program Director for Numerical Simulation & Computer Sciences at CEA/DAM
Atomic Weapons Establishment
AWE provides the warheads for the United Kingdom's nuclear deterrent. It is a centre of scientific and technological excellence.
Need
- A substantial increase in production computing resources for scientific and engineering numerical modeling
- The solution must fit within strict environmental constraints on footprint, power consumption and cooling
Solution
Two identical bullx clusters + a test cluster, with a total of:
- 53 bullx blade chassis containing 944 bullx B500 compute blades, i.e. 7552 cores
- 6 bullx R423 E2 management nodes, 8 login nodes
- 16 bullx R423 E2 I/O and storage nodes
- DataDirect Networks S2A9900 storage system
- Ultra-fast InfiniBand QDR interconnect
- Lustre shared parallel file system
- bullx cluster suite to ensure total cluster management
Combined peak performance in excess of 75 Tflops
Petrobras
Leader in the Brazilian petrochemical sector, and one of the largest integrated energy companies in the world
Need
A supercomputing system:
- to be installed at Petrobras' new Data Center, at the University Campus of Rio de Janeiro
- equipped with GPU accelerator technology
- dedicated to the development of new subsurface imaging techniques to support oil exploration and production
Solution
A hybrid architecture coupling 66 general-purpose servers to 66 GPU systems:
- 66 bullx R422 E2 servers, i.e. 132 compute nodes or 1056 Intel® Xeon® 5500 cores, providing a peak performance of 12.4 Tflops
- 66 NVIDIA® Tesla S1070 GPU systems, i.e. 63360 cores, providing an additional theoretical performance of 246 Tflops
- 1 bullx R423 E2 service node
- Ultra-fast InfiniBand QDR interconnect
- bullx cluster suite and Red Hat Enterprise Linux
Over 250 Tflops peak
One of the largest supercomputers in Latin America
ILION Animation Studios
Need
- Ilion Animation Studios (Spain) needed to double their render farm to produce Planet 51, released end 2009
Solution
Bull provided:
- 64 Bull R422 E1 servers, i.e. 128 compute nodes
- 1 Bull R423 E1 head node
- GB Ethernet interconnect
- running Microsoft Windows Compute Cluster Server 2003
ISC’10 – Top News
25th anniversary – record-breaking attendance
Intel unveils plans for HPC coprocessor
- Tens of GPGPU-like cores with x86 instructions
- A 32-core development version of the MIC coprocessor, codenamed "Knights Ferry", is now shipping to selected customers. A team at CERN has already migrated one of its parallel C++ codes.
TOP500 released – China gained 2nd place!
TERA100 in production, by BULL / CEA:
- First European peta-scale architecture
- World's largest Intel-based cluster
- World's fastest file system (500 GB/s)
Questions & Answers
bullx blade system – Block Diagram
18x compute blades
- 2x Westmere-EP sockets
- 12x DDR3 memory DIMMs
- 1x SATA HDD/SSD slot (optional – diskless an option)
- 1x IB ConnectX/QDR chip
1x InfiniBand Switch Module (ISM) for cluster interconnect
- 36-port QDR IB switch
- 18x internal connections
- 18x external connections
1x Chassis Management Module (CMM)
- OPMA board
- 24-port GbE switch: 18x internal ports to blades, 3x external ports
1x optional Ethernet Switch Module (ESM)
- 24-port GbE switch: 18x internal ports to blades, 3x external ports
1x optional Ultra Capacitor Module (UCM)
bullx blade system – blade block diagrams
[Block diagrams – recoverable information:]
- bullx B500 compute blade: two Westmere-EP (initially Nehalem-EP) sockets linked by QPI at 12.8 GB/s each direction, each socket with 31.2 GB/s of local memory bandwidth; a Tylersburg I/O controller attaches the InfiniBand ConnectX/QDR chip over PCIe x16 (8 GB/s), and the SATA SSD (diskless option) plus GbE over PCIe x8 (4 GB/s) – see the bandwidth arithmetic below.
- bullx B505 accelerator blade: the same two-socket complex with two Tylersburg I/O controllers; each of the two accelerators gets a dedicated PCIe x16 (8 GB/s) link, each I/O controller drives its own InfiniBand port, and PCIe x8 (4 GB/s) serves the SATA SSD (diskless option).

bullx B500 compute blade – board layout (photo © CEA)
[Photo callouts: connector to backplane; Westmere-EP with 1U heatsink; fans; 1.8" HDD/SSD; DDR3 DIMMs (x12); Tylersburg with short heatsink; ConnectX QDR; iBMC; ICH10; board approximately 425 x 143.5 mm]
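The link figures in these diagrams are mutually consistent, assuming the platform described elsewhere in this deck (triple-channel DDR3 and QPI at 6.4 GT/s):

\[ \text{Memory: } 3\ \text{channels} \times 8\,\text{B} \times 1.3\,\text{GT/s} = 31.2\ \text{GB/s per socket} \]
\[ \text{QPI: } 6.4\,\text{GT/s} \times 2\,\text{B} = 12.8\ \text{GB/s each direction} \]
\[ \text{PCIe Gen2: } 16 \times 500\,\text{MB/s} = 8\ \text{GB/s}, \qquad 8 \times 500\,\text{MB/s} = 4\ \text{GB/s} \]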
Ultracapacitor Module (UCM)
NESSCAP capacitors (2x6) on a dedicated board
Embedded protection against short power outages
- Protects one chassis with all its equipment under load
- Up to 250 ms (see the energy estimate below)
Avoids on-site UPS
- Save on infrastructure costs
- Save up to 15% on electrical costs
Improve overall availability
Run longer jobs
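Order-of-magnitude check, assuming the "typical 5.5 kW per chassis" figure quoted later in this deck: riding through a 250 ms outage requires roughly

\[ E \approx P \times t = 5.5\,\text{kW} \times 0.25\,\text{s} \approx 1.4\,\text{kJ} \]

per chassis, an energy budget comfortably within reach of an ultracapacitor bank.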
Bull StoreWay Optima1500
- 750 MB/s bandwidth
- 12 x 4 Gbps front-end connections
- 4 x 3 Gbps point-to-point back-end disk connections
- Supports up to 144 SAS and/or SATA HDDs: SAS 146GB (15k rpm), 300GB (15k rpm); SATA 1000GB (7.2k rpm)
- RAID 1, 10, 5, 50, 6, 3, 3 Dual Parity, Triple Mirror
- 2GB to 4GB cache memory
- Windows, Linux, VMware interoperability (SFR for UNIX)
- 3 models: Single Controller with 2 front-end ports; Dual Controllers with 4 front-end ports; Dual Controllers with 12 front-end ports
CLARiiON CX4-120
UltraScale architecture
- Two 1.2 GHz dual-core LV-Woodcrest CPU modules
- 6 GB system memory
Connectivity
- 128 high-availability hosts
- Up to 6 I/O modules (FC or iSCSI)
- 8 front-end 1 Gb/s iSCSI host ports max
- 12 front-end 4 Gb/s / 8 Gb/s Fibre Channel host ports max
- 2 back-end 4 Gb/s Fibre Channel disk ports
Scalability
- Up to 1,024 LUNs
- Up to 120 drives
CLARiiON CX4-480
UltraScale architecture
- Two 2.2 GHz dual-core LV-Woodcrest CPU modules
- 16 GB system memory
Connectivity
- 256 high-availability hosts
- Up to 10 I/O modules (FC or iSCSI at GA)
- 16 front-end 1 Gb/s iSCSI host ports max
- 16 front-end 4 Gb/s / 8 Gb/s Fibre Channel host ports max
- 8 back-end 4 Gb/s Fibre Channel disk ports
Scalability
- Up to 4,096 LUNs
- Up to 480 drives
DataDirect Networks S2A 9900
Performance
- A single S2A9900 system delivers 6 GB/s reads & writes
- Multiple-system configurations are proven to scale beyond 250 GB/s
- Real-time, zero-latency data access; parallel processing
- Native FC-4, FC-8 and/or InfiniBand 4X DDR
Capacity
- Single system: up to 1.2 Petabytes (1.2 PB in just two racks)
- Multiple systems scale to hundreds of Petabytes
- Intermix SAS & SATA in the same enclosure
- Manage up to 1,200 drives
Innovation
- High-performance DirectRAID™ 6; zero degraded mode
- SATAssure™ Plus data integrity verification & drive repair
- Power-saving drive spin-down with S2A SleepMode; power-cycle individual drives
bullx R422 E2 characteristics
- 1U rackmount – 2 nodes in a 1U form factor
- Intel S5520 chipset (Tylersburg); QPI up to 6.4 GT/s
- Processor: 2x Intel® Xeon® 5600 per node
- Memory: 12 x DIMM sockets Reg ECC DDR3 1GB / 2GB / 4GB / 8GB; up to 96 GB per node at 1333 MHz (with 8GB DIMMs)
- Disks: 2 x HDD per node; hot-swap SATA2 drives @ 7.2k rpm, 250/500/750/1000/1500/2000 GB
- InfiniBand: 1 optional on-board DDR or QDR controller per node
- Expansion slots: 1 PCI-E x16 Gen2 (per node)
- Rear I/O: 1 external IB, 1 COM port, VGA, 2 Gigabit NIC, 2 USB ports (per node)
- 1x shared Power Supply Unit, 1200 W max; fixed / no redundancy; 80 PLUS Gold
- Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450R (per node); independent power control circuitry built in for power management
bullx R423 E2
The perfect server for service nodes
- 2U rackmount
- Processor: 2x Intel® Xeon® 5600
- Chipset: 2x Intel® S5520 (Tylersburg); QPI up to 6.4 GT/s
- Memory: 18 DIMM sockets DDR3, up to 144 GB at 1333 MHz
Disks (all disks 3.5 inches)
- Without add-on adapter: 6 SATA2 HDD (7,200 rpm – 250/500/750/1000/1500/2000 GB)
- With PCI-E RAID SAS/SATA add-on adapter: support of RAID 0, 1, 5, 10; 8 SATA2 (7,200 rpm – 250/500/750/1000/1500/2000 GB) or SAS HDD (15,000 rpm – 146/300/450 GB)
Expansion slots (low profile)
- 2 PCI-E Gen2 x16
- 4 PCI-E Gen2 x8
- 1 PCI-E Gen2 x4
Redundant Power Supply Unit
Matrox Graphics MGA G200eW embedded video controller
Management
- BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R on dedicated RJ45 port
WxHxD: 437 mm x 89 mm x 648 mm
Bull System Manager Suite
- Consistent administration environment thanks to the cluster database
- Ease of use through centralized monitoring, fast and reliable deployment, and configurable notifications
- Built from the best Open Source and commercial software packages: integrated, tested, supported
Detailed knowledge of cluster structure
[Flow diagram – recoverable information:]
The architecture drawing and the equipment/IP address description feed a logical netlist and a physical netlist, from which cable labels and a preload file are generated; the installer consumes the preload file. Factory-customized clusters and standard clusters (preload file models « A » and « B ») are prepared with the support of the R&D expertise centre.
Product descriptions
bullx blade system
bullx supernodes
bullx rack-mounted systems
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
bullx blade system – overall concept
bullx blade system – overall concept
General purpose, versatile
- Xeon Westmere processor
- 12 memory slots per blade
- Local HDD/SSD or diskless
- IB / GbE
- RH, SuSE, Win HPC 2008, CentOS, …
- Compilers: GNU, Intel, …
High density
- 7U chassis
- 18x blades with 2 processors, 12x DIMMs, HDD/SSD slot, IB connection
- 1x IB switch (36 ports)
- 1x GbE switch (24 ports)
- Ultracapacitor
Uncompromised performance
- Support of high-frequency Westmere parts
- Memory bandwidth: 12x memory slots
- Fully non-blocking IB QDR interconnect
- Up to 2.53 TFLOPS per chassis
- Up to 15.2 TFLOPS per rack (with CPUs)
Leading-edge technologies
- Intel Nehalem
- InfiniBand QDR
- Diskless
- GPU blades
Optimized power consumption (see the efficiency figure after this list)
- Typical 5.5 kW / chassis
- High-efficiency (90%) PSU
- Smart fan control in each chassis
- Smart fan control in water-cooled rack
- Ultracapacitor → no UPS required
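Putting the two chassis figures above together gives the peak energy efficiency implied by this slide:

\[ \frac{2.53\ \text{Tflops}}{5.5\ \text{kW}} \approx 460\ \text{Mflops/W per chassis} \]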
bullx chassis packaging
[Photo callouts: 7U chassis with LCD unit, CMM, 4x PSU, 18x blades, ESM]
bullx B505 accelerator blade
Embedded accelerator for high performance with high energy efficiency
- Per blade: 2 x Intel Xeon 5600, 2 x NVIDIA T10(*), 2 x IB QDR
- 2.1 TFLOPS at 0.863 kW per blade
- 18.9 TFLOPS in a 7U chassis (see the arithmetic below)
(*) T20 is on the roadmap
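The chassis figure is simply nine double-width blades (half of the 18 single-width slots) per 7U chassis:

\[ 9 \times 2.1\ \text{TFLOPS} = 18.9\ \text{TFLOPS in 7U} \]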
bullx B505 accelerator blade
- Double-width blade
- 2 NVIDIA Tesla M1060 GPUs
- 2 Intel® Xeon® 5600 quad-core CPUs
- 1 dedicated PCIe x16 connection for each GPU
- Double InfiniBand QDR connections between blades
[Photos: front view and exploded view, showing the 2 CPUs and 2 GPUs]
Product descriptions
bullx blade system
bullx supernodes
bullx rack-mounted systems
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
bullx supernode
An expandable SMP node for memory-hungry applications
SMP of up to 16 sockets based on the Bull-designed BCS:
• Intel Xeon Nehalem-EX processors
• Shared memory of up to 1TB (2TB with 16GB DIMMs)
Available in 2 formats:
• High-density 1.5U compute node
• High I/O connectivity node
RAS features:
• Self-healing of the QPI and XQPI
• Hot-swap disks, fans, power supplies
Green features:
• Ultra Capacitor
• Processor power management features
bullx supernode: CC-NUMA server
SMP (CC-NUMA): 128 cores, up to 1TB RAM (2TB with 16 GB DIMMs)
Max configuration:
- 4 modules
- 4 sockets/module
- 16 sockets
- 128 cores
- 128 memory slots
[Diagram: four 4-socket Nehalem-EX modules, each with its own BCS and IOH, coupled into one coherent system through the BCS interconnect]
Bull's Coherence Switch (BCS)
Heart of the CC-NUMA design (a NUMA-aware code sketch follows this slide)
- Ensures global memory and cache coherence
- Optimizes traffic and latencies
- MPI collective operations in hardware: reductions, synchronization, barrier
Key characteristics
- 18x18 mm in 90 nm technology
- 6 QPI and 6 XQPI links
- High-speed serial interfaces up to 8 GT/s
- Power-conscious design with selective power-down capabilities
- Aggregate data rate: 230 GB/s
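Software still has to cooperate with the BCS by keeping data local to the socket that uses it. A minimal sketch of explicit NUMA placement on a large CC-NUMA node such as a bullx supernode, using the standard Linux libnuma API (nothing here is BCS- or Bull-specific; the node count and buffer size are illustrative):

```c
/* Build with: gcc numa_place.c -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available on this system\n");
        return EXIT_FAILURE;
    }

    int nodes = numa_max_node() + 1;   /* one per socket, e.g. 16 on a full supernode */
    size_t bytes = 64UL << 20;         /* 64 MB per node, illustrative */
    printf("%d NUMA nodes visible\n", nodes);

    for (int n = 0; n < nodes; n++) {
        /* Allocate memory explicitly on node n, so the threads running
         * there access local DIMMs instead of generating remote
         * (X)QPI traffic. */
        double *buf = numa_alloc_onnode(bytes, n);
        if (buf == NULL) {
            perror("numa_alloc_onnode");
            return EXIT_FAILURE;
        }
        for (size_t i = 0; i < bytes / sizeof(double); i++)
            buf[i] = 0.0;              /* touch pages so they are committed on node n */
        numa_free(buf, bytes);
    }
    return EXIT_SUCCESS;
}
```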
bullx S6030 – 3U Service Module / Node
[Photo callouts – recoverable information:]
- BCS
- Up to 4 Nehalem-EX processors
- 32 DDR3 DIMM slots
- 6 PCIe slots (1x x16, 5x x8)
- Up to 8 hot-swap disks (SATA RAID 1, SAS RAID 5)
- 2 hot-swap power supplies
- Hot-swap fans
- Ultra-capacitor
bullx S6010 – Compute Module / Node
[Photo callouts – recoverable information:]
- 2 modules in 3U (64 cores / 512 GB RAM)
- BCS
- Up to 4 Nehalem-EX processors
- 32 DDR3 DIMM slots
- 1 PCIe x16 slot
- 1 power supply
- Ultra-capacitor
- SATA disk
- Fans
Product descriptions
bullx blade system
bullx supernodes
bullx rack-mounted systems
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
bullx rack-mounted systems
A large choice of options

R423 E2 – SERVICE NODE: enhanced connectivity and storage
 2U
 Xeon 5600
 2-Socket
 18 DIMMs
 2 PCI-Express x16 Gen2
 Up to 8 SAS or SATA2 HDD
 Redundant 80 PLUS Gold power supply
 Hot-swap fans

R422 E2 – COMPUTE NODE: 2 nodes in 1U for unprecedented density (NEW: more memory)
 Xeon 5600
 2x 2-Socket
 2x 12 DIMMs
 QPI up to 6.4 GT/s
 2x 1 PCI-Express x16 Gen2
 InfiniBand DDR/QDR embedded (optional)
 2x 2 SATA2 hot-swap HDD
 80 PLUS Gold PSU

R425 E2 – VISUALIZATION: supports latest graphics & accelerator cards
 4U or tower
 2-Socket
 Xeon 5600
 18 DIMMs
 2 PCI-Express x16 Gen2
 Up to 8 SATA2 or SAS HDD
 Powerful power supply
 Hot-swap fans
bullx R425 E2
For high-performance visualization
- 4U / tower rackmount
- Processor: 2x Intel® Xeon® 5600
- Chipset: 2x Intel® S5520 (Tylersburg); QPI up to 6.4 GT/s
- Memory: 18 DIMM sockets DDR3, up to 144 GB at 1333 MHz
Disks (all disks 3.5 inches)
- Without add-on adapter: 6 SATA2 HDD (7,200 rpm – 250/500/750/1000/1500/2000 GB)
- With PCI-E RAID SAS/SATA add-on adapter: support of RAID 0, 1, 5, 10; 8 SATA2 (7,200 rpm – 250/500/750/1000/1500/2000 GB) or SAS HDD (15,000 rpm – 146/300/450 GB)
Expansion slots (high profile)
- 2 PCI-E Gen2 x16
- 4 PCI-E Gen2 x8
- 1 PCI-E Gen2 x4
Powerful Power Supply Unit
Matrox Graphics MGA G200eW embedded video controller
Management
- BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R on dedicated RJ45 port
WxHxD: 437 mm x 178 mm x 648 mm
Product descriptions
bullx blade system
bullx rack-mounted systems
bullx supernodes
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
GPU accelerators for bullx
NVIDIA® Tesla™ computing systems: teraflops many-core processors that provide outstanding, energy-efficient parallel computing power

NVIDIA Tesla C1060 – to turn an R425 E2 server into a supercomputer
 Dual-slot wide card
 Tesla T10P chip
 240 cores
 Performance: close to 1 Tflops (32-bit FP)
 Connects to PCIe x16 Gen2

NVIDIA Tesla S1070 – the ideal booster for R422 E2 or S6030-based clusters
 1U drawer
 4 x Tesla T10P chips
 960 cores
 Performance: 4 Tflops (32-bit FP)
 Connects to 2 PCIe x16 Gen2
Ready for future Tesla processors (Fermi)
[Roadmap chart, Q4 2009 – Q4 2010; recoverable information:]
- Tesla C1060 (available Q4 2009): 933 Gigaflops SP, 78 Gigaflops DP, 4 GB memory – mid-range performance
- Tesla C2050 (2010): 520–630 Gigaflops DP, 3 GB memory, ECC – large datasets
- Tesla C2070 (2010): 520–630 Gigaflops DP, 6 GB memory, ECC – performance
- The Fermi parts deliver roughly 8x the peak DP performance of the C1060
Disclaimer: performance specification may change
Ready for future Tesla 1U Systems (Fermi)
[Roadmap chart, Q4 2009 – Q4 2010; recoverable information:]
- Tesla S1070-500 (available Q4 2009): 4.14 Teraflops SP, 345 Gigaflops DP, 4 GB memory / GPU – mid-range performance
- Tesla S2050 (2010): 2.1–2.5 Teraflops DP, 3 GB memory / GPU, ECC – large datasets
- Tesla S2070 (2010): 2.1–2.5 Teraflops DP, 6 GB memory / GPU, ECC – performance
- The Fermi parts deliver roughly 8x the peak DP performance of the S1070
Disclaimer: performance specification may change
NVIDIA Tesla 1U system & bullx R422 E2
[Connection diagram – recoverable information:]
The Tesla 1U system connects to the host through PCIe Gen2 host interface cards installed in the bullx R422 E2 server (1U), linked by PCIe Gen2 cables to the corresponding host interface cards of the NVIDIA Tesla 1U system.
Product descriptions
bullx blade system
bullx rack-mounted systems
bullx supernodes
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
Bull Storage for HPC clusters
A complete line of storage systems
• Performance
• Modularity
• High Availability*
A rich management suite
• Monitoring
• Grid & standalone system deployment
• Performance analysis
*: with Lustre
Bull Storage Systems for HPC – details

Optima1500
- Disks: up to 144; SAS 146/300/450 GB, SATA 1 TB
- RAID: R1, 3, 3DP, 5, 6, 10, 50 and TM
- Host ports: 2/12 FC4; back-end ports: 2 SAS 4X
- Cache size (max): 4 GB
- Controller size: 2U base with disks; disk drawer: 2U, 12 slots
- Performance (RAID 5): read up to 900 MB/s, write up to 440 MB/s

CX4-120
- Disks: up to 120; FC 146/300/400/450 GB, SATA 1 TB
- RAID: R0, R1, R10, R3, R5, R6
- Host ports: 4/12 FC4; back-end ports: 2
- Cache size (max): 6 GB
- Controller size: 3U; disk drawer: 3U, 15 slots
- Performance (RAID 5): read up to 720 MB/s, write up to 410 MB/s

CX4-480
- Disks: up to 480; FC 10k rpm 400 GB, FC 15k rpm 146/300/450 GB, SATA 1 TB
- RAID: R0, R1, R10, R3, R5, R6
- Host ports: 8/16 FC4; back-end ports: 8
- Cache size (max): 16 GB
- Controller size: 3U; disk drawer: 3U, 15 slots
- Performance (RAID 5): read up to 1.25 GB/s, write up to 800 MB/s

S2A 9900 couplet
- Disks: up to 1200; SAS 15k rpm 300/450/600 GB, SATA 500/750/1000/2000 GB
- RAID: 8+2 (RAID 6)
- Host ports: 8 FC4; back-end ports: 20 SAS 4X
- Cache size (max): 5 GB, RAID-protected
- Controller size: 4U (couplet); disk drawers: 3/2/4U, 16/24/60 slots
- Performance: read & write up to 6 GB/s
Bull storage systems – administration & monitoring
HPC-specific administration framework
- Specific administration commands developed on the CLI: ddn_admin, nec_admin, dgc_admin, xyr_admin
- Model file for configuration deployment: LUN information, access control information, etc.
- Easily replicable across many storage subsystems
HPC-specific monitoring framework
- Specific SNMP trap management
- Periodic monitoring of all storage subsystems in the cluster
- Storage views in Bull System Manager HPC edition: detailed status for each item (power supply, fan, disk, FC port, Ethernet port, etc.); LUN/zoning information
Product descriptions
bullx blade system
bullx supernodes
bullx rack-mounted systems
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
Bull Cool Cabinet Door
Innovative Bull design
- 'Intelligent' door (self-regulates fan speed depending on temperature)
- Withstands fan or water incidents (fans increase speed and extract hot air)
- Optimized serviceability
- A/C redundancy
Side benefits for the customer
- No more hot spots in the computer room – good for overall MTBF!
Ready for upcoming Bull Extreme Computing systems
- 40 kW is a perfect match for a rack configured with bullx blades or future SMP servers at highest density
Jülich Research Center: water-cooled system
Cool cabinet door: characteristics
- Width: 600 mm (19")
- Height: 2020 mm (42U)
- Depth: 200 mm (8")
- Weight: 150 kg
- Cooling capacity: up to 40 kW
- Power supply: redundant
- Power consumption: 700 W
- Input water temperature: 7-12 °C
- Output water temperature: 12-17 °C
- Water flow: 2 liters/second (7 m³/hour) – see the heat-balance check below
- Ventilation: 14 managed multi-speed fans
- Recommended cabinet air inlet: 20 °C ± 2 °C
- Cabinet air outlet: 20 °C ± 2 °C
- Management: integrated management board for local regulation and alert reporting to Bull System Manager
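These figures are self-consistent: with roughly a 5 K water temperature rise at the stated 2 L/s flow, the door removes

\[ \dot{Q} = \dot{m}\,c_p\,\Delta T \approx 2\,\text{kg/s} \times 4.19\,\text{kJ/(kg·K)} \times 5\,\text{K} \approx 42\ \text{kW} \]

matching the stated cooling capacity of up to 40 kW.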
Cool Cabinet Door: how it works
Flexible operating conditions
- Operating parameters adaptable to various customer conditions
- Energy savings further optimized depending on server activity
Next step: mastering water distribution
- Predict temperature, flow velocity and pressure drop within the customer's water distribution system
- Promote an optimized solution
Product descriptions
bullx blade system
bullx supernodes
bullx rack-mounted systems
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
bullx cluster suite benefits
With the features developed and optimized by Bull, your HPC cluster is all that a production system should be: efficient, reliable, predictable, secure, easy to manage.
Cluster Management
- Fast deployment; reliable management; intuitive monitoring; powerful control tools
- Save administration time; help prevent system downtime
MPI (a minimal MPI sketch follows this slide)
- Scalability of parallel application performance; standard interface compatible with many interconnects
- Improve system productivity; increase system flexibility
File system: Lustre
- Improved I/O performance; scalability of storage; easy management of the storage system
- Improve system performance; provide unparalleled flexibility
Interconnect
- Cutting-edge InfiniBand stack support
- Improve system performance
Development tools
- Fast development and tuning of applications; easy analysis of code behaviour
- Save development and optimization time
Kernel debugging and optimization tools
- Easy optimization of applications
- Help get the best performance, and thus the best return on investment
SMP & NUMA architectures
- Better performance on memory-intensive applications
- Optimized, reliable and predictable performance
Red Hat distribution
- Standard; application support / certification
- Large variety of supported applications
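Since MPIBull2 implements the standard MPI-2 interface, application code stays generic. A minimal sketch (ordinary MPI C code; the only assumption is that it would be compiled with the suite's MPI compiler wrapper against MPIBull2):

```c
/* Minimal MPI program: every rank contributes a partial value and
 * rank 0 prints the reduced total over the cluster interconnect. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)(rank + 1);   /* stand-in for real per-node work */
    double total = 0.0;

    /* Collective reduction; on bullx hardware this maps onto the
     * InfiniBand fabric through the OFED access layer. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}
```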
bullx cluster suite components
Application environment
- Development libraries & tools
- Execution environment: job scheduling, resource management
- MPIBull2
System environment
- Installation/configuration: Lustre config, Ksis, Nscontrol, parallel commands (// cmds)
- Monitoring/control/diagnostics: Nagios, Ganglia
- Bull System Manager cluster database
- File systems: Lustre, NFSv4, NFSv3
- Linux OS and Linux kernel; interconnect access layer (OFED, …)
Hardware
- XPF SMP platforms
- GigE network switches (administration network)
- InfiniBand/GigE interconnects (HPC interconnect)
- Bull StoreWay / disk arrays
Product descriptions
bullx blade system
bullx supernodes
bullx rack-mounted systems
NVIDIA Tesla Systems
Bull Storage
Cool cabinet door
mobull
bullx cluster suite
Windows HPC Server 2008
Bull and Windows HPC Server 2008
Clusters of bullx R422 E2 servers
- Intel® Xeon® 5500 processors
- Compact rack design: 2 compute nodes in 1U or 18 compute nodes in 7U, depending on model
- Fast & reliable InfiniBand interconnect
supporting Microsoft® Windows HPC Server 2008
- Simplified cluster deployment and management
- Broad application support
- Enterprise-class performance and scalability
Common collaboration with leading ISVs to provide complete solutions
The right technologies to handle industrial applications efficiently
Windows HPC Server 2008
Combining the power of the Windows Server platform with rich, out-of-the-box functionality to help improve the productivity and reduce the complexity of your HPC environment

Microsoft® Windows Server® 2008 HPC Edition
• Support for high-performance hardware (x64-bit architecture)
• Winsock Direct support for RDMA for high-performance interconnects (Gigabit Ethernet, InfiniBand, Myrinet, and others)
+ Microsoft® HPC Pack 2008
• Support for the MPI2 industry standard
• Integrated job scheduler
• Cluster resource management tools
= Microsoft® Windows® HPC Server 2008
• Integrated “out of the box” solution
• Leverages past investments in Windows skills and tools
• Makes cluster operation just as simple and secure as operating a single system
A complete turn-key solution
Bull delivers a complete ready-to-run solution
- Sizing
- Factory pre-installed and pre-configured (R@ck'n Roll)
- Installation, integration into the existing infrastructure
- 1st and 2nd level support
- Monitoring, audit
- Training
Bull has a Microsoft Competence Center
bullx cluster 400-W
Enter the world of High Performance Computing with bullx cluster 400-W running Windows HPC Server 2008
bullx cluster 400-W4
- 4 compute nodes to relieve the strain on your workstations
bullx cluster 400-W8
- 8 compute nodes to give independent compute resources to a small team of users, enabling them to submit large jobs or several jobs simultaneously
bullx cluster 400-W16
- 16 compute nodes to equip a workgroup with independent high-performance computing resources that can handle their global compute workload
A solution that combines:
- The performance of bullx rack servers equipped with Intel® Xeon® processors
- The advantages of Windows HPC Server 2008: simplified cluster deployment and management; easy integration with the IT infrastructure; broad application support; familiar development environment
- And expert support from Bull's Microsoft Competence Center