Managing Costs and Accelerating Product Development

High Performance Computing at Mercury Marine
Arden Anderson
Mercury Marine Product Development and Engineering
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Mercury Marine Founded in Cedarburg, WI in 1939
• Today, the USA’s Only Outboard Manufacturer
• Mercury Marine began as the Kiekhaefer Corp. in 1939
  – Founded by E. Carl Kiekhaefer
• Employs 4,200 People Worldwide
• Mercury acquired by Brunswick Corporation in 1961
  – Leader in active recreation: marine engines, boating, bowling, billiards, and fitness
• Fond du Lac, WI campus includes
  – Corporate Offices
  – Technology Center, R&D Offices
  – Outboard Manufacturing (Casting, Machining, Assembly to Distribution)
Mercury’s 1st Patent (figure)
The Most Comprehensive Product Offering in Recreational Marine
• Outboard Engines (2.5 hp to 350 hp)
• Sterndrive Engines (135 hp to 1250 hp)
• All new or updated in the last 5 years
• All updated to the new emissions standard in the last year
• Land ‘N’ Sea / Attwood
• Props / Rigging / P&A
Diversified, Quality Products, Connected to Parent Corporation
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Poll Question
3) How many compute cores do you use for your largest jobs?
a. Less than 4
b. 4-16
c. 17-64
d. More than 64
Standard FEA
• Fatigue & Hardware Correlation
• System Assemblies with Contact
• Non-Linear Gaskets
• Sub-Modeling
Explicit FEA
• System level submerged object impact
– Method development was presented at the 2008 Abaqus Users Conference
CFD
• Transient Internal Flow – flow distribution correlated to hardware
• External Flow – vessel drag, heave, and pitch (Test = 35 MPH, CFD = 33 MPH)
• Two Phase Flow – cavitation onset
• Moving mesh propeller
Heat Transfer
• Enclosure Air Flow & Component Temperatures
• Conjugate Heat Transfer for Temperature Distribution & Thermal Fatigue
Overview of Mercury Marine Design Analysis Group
Experience
• Aerospace
• Automotive and Off-Highway
• Composites
• Dynamic Impact and Weapons
• Gas and Diesel Engine
• Hybrid
• Marine

Simulation Methods
• Structural Analysis
  – Implicit Finite Element
  – Explicit Finite Element
• Dynamic Analysis
• Fluid Dynamics
• Heat Transfer
• Engine Performance

Analyst Workstations
• Pre and post processing
• Dual Xeon 5160 (4 core), 3.0 GHz
• Up to 16 GB RAM
• 64-bit Windows XP

HPC System
• FEA and CFD solvers
• 80 cores (10 nodes x 8 cores/node)
• Up to 40 GB RAM per node
• InfiniBand switch
• Windows HPC Server 2008
Poll Question
3) How many compute cores do you use for your largest jobs?
a. Less than 4
b. 4-16
c. 17-64
d. More than 64
(This slide is a placeholder for coming back to the poll question responses)
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Evolution of Computing Systems at Mercury Marine
2004
• Pre and post processing on Windows PC, 2 GB RAM
• Computing on HP Unix workstation
  – Single CPU
  – 4-8 GB RAM
• Memory limitations on pre-post and limited model size
• Minimal parallel processing (CFD only)

2005
• Updated processing capabilities with Linux compute server
  – 4 CPU Itanium, 32 GB RAM for FEA
  – 6 CPU Opteron for CFD
• $125k server
• Increased model size with larger memory
• Parallel processing for FEA & CFD

2007
• Updated pre-post (2004 PCs) with 2x2 core Linux workstations
• ~$200k for 10 boxes
• ~Same number of processors as previous system with large increases in speed and capability
  – 3.0 GHz
  – 4-16 GB RAM
• More desktop memory for pre-processing
• Increased computing by clustering the new pre-post machines
• Small & mid-sized standard FEA on pre-post machines using multiple CPUs

Introduce Windows HPC Server in 2009…
2009 HPC Decision
INFLUENCING FACTORS
• Emphasis on minimizing analysis time over maximizing computer & software utilization
• Cost Conscious
• Limited availability of server room Linux support

GOALS
• Reduce large run times by 2.5x or more
• Easy to implement
• Machine would run only Abaqus
• Ability to handle larger future runs
• System needs to be supported by in-house IT support
• Re-evaluate software versus hardware balance
Why Windows HPC Server?
• Limited access to Unix/Linux support group
• Unix/Linux support group has database expertise – little experience in high performance computing
• HPC projects lower priority than company database projects
• Larger Windows platform support group
• Benchmarking showed competitive run times
• Intuitive use and easy management
– Job Scheduler
– Add Node Wizard
– Heat Map
Mercury HPC System Detail, 2009
• Windows Server 2008 HPC Edition
• 32-core cluster plus head node
• 4 compute nodes with 8 cores per node
• 40 GB/node – 160 GB total
• GigE switch

Head Node X3650
• Processors: 2 x E5440 Xeon quad-core, 2.8 GHz / 12 MB L2 / 1333 MHz FSB
• Memory: 16 GB 667 MHz
• Hard drives: 6 x 1.0 TB SATA in RAID 10

4 Compute Nodes X3450
• Processors: 2 x E5472 Xeon quad-core, 3.0 GHz / 12 MB L2 / 1600 MHz FSB
• Memory: 40 GB 800 MHz
• Drives: 2 x 750 GB SATA RAID 0
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Justification
• Request from management to reduce run turnaround time – some run times of 1-2 weeks as runs have become more detailed and complex
• Quicker feedback to avoid late tooling changes
• Need to minimize manpower down time
• Large software costs – need to maximize software investment
Budget Breakdown
2009 vs. 2010 budget breakdown across Manpower, Software, Computers, and Other (pie charts)
• Computers are a small portion of budget
• Budget skewed towards software over hardware
• Rebalancing hardware/software in 2009 slightly shifted this breakdown
Abaqus Token Balancing
• Previous Abaqus token count was high to enable multiple simultaneous jobs on smaller machines
• Re-balance tokens from several small jobs to fewer large jobs

Tokens required per job:
  CPUs:    4    8    16    32
  Tokens:  8   12    16    21

Original 45 tokens – e.g. 1 x 8 CPU + 4 x 4 CPU (44 tokens)
New 40 tokens – e.g. 2 x 16 CPU + 1 x 4 CPU (40 tokens), 1 x 32 CPU + 2 x 4 CPU (37 tokens), or 3 x 8 CPU (36 tokens)
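The per-job token counts in the table are consistent with the commonly published Abaqus analysis-token scaling rule, tokens = floor(5 x N^0.422) for N cores. The sketch below is a minimal illustration, assuming that rule, for reproducing the table and checking a job mix against a token pool:

    # Minimal sketch of Abaqus analysis-token accounting, assuming the commonly
    # published scaling rule tokens = floor(5 * N**0.422); it reproduces the
    # CPU/token table on this slide (4 -> 8, 8 -> 12, 16 -> 16, 32 -> 21).
    import math

    def tokens_for(cores: int) -> int:
        return math.floor(5 * cores ** 0.422)

    def mix_tokens(mix):
        # mix is a list of (job_count, cores_per_job) pairs run simultaneously
        return sum(count * tokens_for(cores) for count, cores in mix)

    print([tokens_for(n) for n in (4, 8, 16, 32)])   # [8, 12, 16, 21]
    print(mix_tokens([(1, 8), (4, 4)]))              # 44 -> fits the original 45-token pool
    print(mix_tokens([(1, 32), (2, 4)]))             # 37 -> fits the re-balanced 40-token pool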
HPC System Costs (2009)
• System Buy Price with OS: $37,000
• 2 Year Lease Price: $16,000 per year
• Software re-scaled to match new system
• Incremental cost: $7,300 per year
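A quick arithmetic reading of these figures: the two-year lease totals $32,000 versus the $37,000 buy price, and the $7,300 net figure suggests the lease was largely offset by savings from the re-scaled software licenses (that offset is an inference, not a number stated on the slide):

    # Arithmetic reading of the cost slide; the software-offset line is an
    # assumption, not a figure given in the source.
    buy_price = 37_000
    lease_per_year = 16_000
    incremental_per_year = 7_300
    print(2 * lease_per_year)                     # 32,000 over the two-year lease vs. 37,000 to buy
    print(lease_per_year - incremental_per_year)  # ~8,700/yr implied software savings, if this reading holds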
Historic Productivity Increases
• Continual improvement in productivity
• Large increases in analysis complexity
[Chart: Productivity (Work / Budget) by year, 2005-2010]
Abaqus S4b Implicit Benchmark
• Cylinder Head Bolt-up
• 5,000,000 DOF
• 32 GB memory

Run time in hours:
• Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB): 1.5 on 4 CPUs
• Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): 0.61 / 0.50 / 0.38 on 8 / 16 / 32 CPUs
Mercury “Real World” Standard FEA
• Block + Head + Bedplate
• 8,800,000 DOF
• 55 GB memory
• Preload + Thermal + Reciprocating Forces
(Picture: Abaqus benchmark S4b model)

Run time in hours (days):
• Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): 64 (3) / 37 (1.5) / 31 (1.3) on 8 / 16 / 32 CPUs
• Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB): 213 (9) on 4 CPUs
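Read against the 2009 goal of cutting large run times by 2.5x or more, the table above works out to roughly a 7x improvement for this model (a simple check on the figures shown, not new data):

    # Speedup check for the block/head/bedplate model, hours from the table above.
    itanium_4cpu_hours = 213
    hpc_32cpu_hours = 31
    print(f"{itanium_4cpu_hours / hpc_32cpu_hours:.1f}x")   # ~6.9x, well above the 2.5x goal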
Mercury “Real World” Explicit FEA
• Outboard Impact
• 600,000 Elements
• dt = 3.5e-8 s for 0.03 s (857k increments)

Run time in hours:
• Mercury Linux Cluster (4 nodes at 2 cores/node): 58 on 8 CPUs
• Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): 29.5 / 16 / 11 on 8 / 16 / 32 CPUs
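Both the increment count and the headline improvement follow directly from the numbers above (a quick arithmetic check on the figures shown):

    # Sanity checks on the explicit outboard-impact figures above.
    dt, duration = 3.5e-8, 0.03
    print(round(duration / dt))    # 857,143 increments, i.e. the "857k" on the slide
    print(f"{58 / 11:.1f}x")       # ~5.3x: old 8-CPU Linux cluster vs. 32-CPU HPC system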
Outline
• About Mercury Marine
• Engineering simulation capabilities
• Progression of computing systems
• HPC system cost and justification
• Summary
Summary
• Mercury HPC has evolved over the last 5 years
• Each incremental step has led to greater throughput and increased capabilities that have allowed us to better meet the demands of a fast-paced product development cycle
• Our latest HPC server has delivered improvements in run times as high as 8x at a very affordable price
• We expect further gains in meshing productivity as we re-size runs to the new computing system
Progress Continues: Mercury HPC System Detail, 2010 Updates
• Windows Server 2008 HPC Edition
• Add 48 cores to the existing cluster (combined total of 80 cores)
  – 6 compute nodes with 8 cores per node
  – 24 GB/node
• Now running FEA and CFD on the HPC system (~70/30 split)
• InfiniBand switch

Head Node X3650
• Processors: 2 x E5440 Xeon quad-core, 2.8 GHz / 12 MB L2 / 1333 MHz FSB
• Memory: 16 GB 667 MHz
• Hard drives: 6 x 1.0 TB SATA in RAID 10

4 Compute Nodes X3450
• Processors: 2 x E5472 Xeon quad-core, 3.0 GHz / 12 MB L2 / 1600 MHz FSB
• Memory: 40 GB 800 MHz per node
• Drives: 2 x 750 GB SATA RAID 0

6 Compute Nodes X3550
• Processors: 2 x E5570 Xeon quad-core, 3.0 GHz
• Memory: 24 GB RAM per node
• Drives: 2 x 500 GB SATA RAID 0
Thank You. Questions?
Contact Info and Links
• Arden Anderson
– arden.anderson@mercmarine.com
• Microsoft HPC Server Case Study
– http://www.microsoft.com/casestudies/Windows-HPC-Server2008/Mercury-Marine/Manufacturer-Adopts-Windows-ServerBased-Cluster-for-Cost-Savings-Improved-Designs/4000008161
• Crash Prediction for Marine Engine Systems at the 2008 Abaqus Users Conference
  – Available by searching the conference archives for Mercury Marine: http://www.simulia.com/events/search-ucp.html