Confidential
MegaDC System Portfolio
Mt. Hamilton 1-Socket Ampere
System Design Team – Huy Vu
04 / 04 / 2023
Better Faster Greener™ © 2023 Supermicro
MegaDC Building Block for 1-Socket Enterprise
Mt. Hamilton Program, Common MB

Edge Compute, 17-inch
• 6x NVMe
• 2x FHFL PCIe/Accelerator
• 2x FHHL PCIe/Accelerator
• 1x LP PCIe
• OCP 3.0 NIC

Gaming/AI, 2U4GPU
• 4x NVMe
• 4x DW GPU
• 1x LP PCIe
• OCP 3.0 NIC

Front End Server, 1U10NVMe
• 10x NVMe
• 3x LP PCIe
• OCP 3.0 NIC

Storage, 1U Drawer (OEM)
• 12x LFF by BCM3916
• 4x E1.S
• 2x NVMe
• 3x LP PCIe
• OCP 3.0 NIC

Database, 2U12Bay
• 12x LFF by BCM3816
• 4x NVMe (optional)
• 3x LP PCIe
• OCP 3.0 NIC

Database, 2U12Bay (OEM)
• 12x LFF by BCM3816
• 2x NVMe (optional)
• 2x DW GPU
• 1x LP PCIe
• OCP 3.0 NIC

7/7/2023
Predictability of Performance

[Chart: Redis p.99 latencies (microseconds) vs. platform utilization (0-100%), lower is better. The Intel Xeon "Ice Lake" 8380 curve rises past a 1 ms p.99 SLA as SMT kicks in and Turbo takes effect; the Ampere® Altra® Q80-30 curve stays flat across all utilization levels.]

Predictable performance is critical in multi-tenant cloud environments, where stringent latency SLAs are the norm for most deployments. Ampere® Altra® and Ampere® Altra® Max demonstrate extremely consistent and predictable p.99 latencies for cloud native applications like Redis.

The graph tracks the p.99 latencies of a multi-instance Redis setup as the number of instances (and the platform utilization) is increased. The memtier_benchmark load generator is used with the following command line:

memtier_benchmark -p <port> --clients=1 --threads=1 --pipeline=1 --ratio=1:100 --key-pattern=R:R --data-size=1048576 --test-time=30 --out-file <file>
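For reference, the p.99 figures plotted above are the 99th-percentile values of the raw per-request latency samples. A minimal sketch of the calculation (nearest-rank method; the sample values below are illustrative, not measured data):

```python
# Nearest-rank percentile: the smallest sample that is >= p% of all samples.
def percentile(samples, p):
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100), integer math
    return ordered[int(rank) - 1]

# Illustrative latency samples in microseconds (one slow outlier).
latencies_us = [120, 135, 150, 180, 200, 950, 210, 160, 140, 130]
p99 = percentile(latencies_us, 99)  # dominated by the outlier: 950
p50 = percentile(latencies_us, 50)  # median-like value: 150
```

This is why p.99 (rather than the mean) is the metric tracked against the 1 ms SLA: a single slow request per hundred is enough to move it.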
SPEC2017_rate Integer Base (Estimated)
Compelling Performance, Low Power Consumption, Overall Energy Efficiency Leader!

[Chart: relative performance (0-350%) vs. package power (100-280 W) for Ampere® Altra® Max, AMD EPYC Milan, and Intel Xeon Ice Lake. Altra® Max sits in the high energy efficiency zone: roughly 40% lower power at similar performance, or 2x the performance at similar power levels.]

Compiler: gcc 10.2, flags used: -O3 -flto=32 -m64 -march=<neoverse-n1/icelake-server/znver2>
Ampere® Altra® Max Energy Efficiency

[Chart: Ampere Altra Max M128-30 CPU frequency and power over time during SPECrate2017_int base (estimated). The CPU runs consistently at max frequency while power stays below the 250W TDP, leaving TDP headroom; power fluctuates depending on the test.]

[Chart: AMD EPYC 7763 CPU frequency and power over time during SPECrate2017_int base (estimated). Power exceeds the 280W TDP at times and fluctuates depending on the test; the CPU cannot maintain max frequencies.]

                      Performance (SPECrate2017_int base)   Usage Power   Performance/Watt
AMD EPYC Milan        331                                   280W          1.0x
Ampere® Altra® Max    360                                   178W          1.71x

Ampere® Altra® Max maintains predictable core frequencies while consuming lower power (below TDP). Power headroom means workload-driven power capping can lead to huge density improvements. Compelling performance/Watt at competitive levels of performance.
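The 1.71x Performance/Watt figure follows directly from the performance and usage-power numbers on this slide; a quick arithmetic check:

```python
# Values from the slide: SPECrate2017_int base (estimated) and usage power.
milan_perf, milan_power = 331, 280    # AMD EPYC Milan
altra_perf, altra_power = 360, 178    # Ampere Altra Max

milan_ppw = milan_perf / milan_power  # ~1.18 score per watt (the 1.0x baseline)
altra_ppw = altra_perf / altra_power  # ~2.02 score per watt

ratio = altra_ppw / milan_ppw         # relative Performance/Watt
print(round(ratio, 2))                # -> 1.71
```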
Ampere: Leadership Performance for Cloud Workloads
Highest Performance and Power Efficiency Across Key Cloud Workloads(1)(2)

[Charts: Performance and Performance/W for Intel Ice Lake, AMD Milan, and Ampere® Altra® Max across five workloads (throughput: higher is better; latency: lower is better). Ampere® Altra® Max leads in each case:
• Web Services (NGINX)(3): 2.8x
• Database (MySQL)(3): 2.0x
• In-Memory Caching (Redis)(3): 2.4x
• Media Transcoding (h.264)(3): 3.8x
• AI Inference, Image Classification (ResNet-50)(4): 2.9x (Intel Ice Lake at 13%)]

Notes:
1. Based on Company benchmarking.
2. Intel Ice Lake represents the Intel 8380 SKU; AMD Milan represents the AMD 7763 SKU.
3. Percentages represent AMD Milan and Ampere® Altra® Max indexed against Intel Ice Lake.
4. Percentages represent Intel Ice Lake and Ampere® Altra® Max indexed against AMD Milan.
Data Center Power Consumption is Rising

Legacy CPU power consumption keeps increasing: data centers already account for 1-2% of global electricity demand (2020), projected to grow 2-4x by 2030.

Data centers are increasingly unwelcome neighbors. Recent limits and moratoriums on DC expansion: Ireland, Amsterdam, Singapore, Frankfurt, London.

Server Efficiency is Fundamental to Sustainable Growth

Projected through 2025, Legacy x86 approach vs. Cloud Native Ampere approach(1):
• Server power: up 2.0x (legacy) vs. down to 0.8x (Ampere)
• DC real estate: up 1.6x (legacy) vs. down to 0.7x (Ampere)

Ampere: Sustainability at the Core
• Industry Leading Performance
• Industry Leading Power Efficiency
• Giving back through Open Standards

Notes:
1. Ampere internal models and analysis to identify total compute demand, power consumption numbers, and real estate footprint for legacy and Ampere processors in 2025.
Rack Efficiency Using Ampere Cloud Native Processors
Based on a 42U rack @ 12.8 kW

Performance per Rack(1) (relative; the Intel and AMD configurations strand rack capacity at this power budget):

Workload        Intel   AMD     Ampere
SIR2017 Est.    1X      1.4X    2X
Redis           1X      1.5X    2.6X
NGINX           1X      1.7X    3.5X
x.264(2)        1X      1.7X    2.25X
Cassandra       1X      1.1X    1.8X

Per rack        Intel Ice Lake 8380   AMD Milan 7763   Ampere Altra Max M128-26
Servers         15                    14               38
Cores           1200                  1792             4864

Use 2-3X Fewer Racks vs Legacy x86 for Equivalent Performance

Notes:
1. Ampere internal models and analysis to identify total compute performance and system usage power consumption numbers, in a standard 42U 12.8kW rack; see end notes.
2. Data point uses data taken on the M128-30 whereas all other data points use the M128-26.
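The "fewer racks" claim is a direct consequence of the performance-per-rack ratios above: for a fixed total-performance target, rack count shrinks by the per-rack factor. A small sketch using the SIR2017 row (the total-performance target of 20 Intel-rack units is a hypothetical example, not a figure from the slide):

```python
import math

# Relative performance per rack, SIR2017 Est. row of the table above.
perf_per_rack = {"Intel": 1.0, "AMD": 1.4, "Ampere": 2.0}

# Hypothetical deployment target, expressed in Intel-rack units.
target = 20.0

# Racks needed for equivalent total performance (rounded up to whole racks).
racks = {vendor: math.ceil(target / perf) for vendor, perf in perf_per_rack.items()}
print(racks)  # Intel needs 20 racks; Ampere needs 10 (2X fewer)
```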
Ampere’s Expanding Software & Provider Ecosystem

Broad developer ecosystem with 165+ software applications undergoing daily automated functionality and performance testing, spanning:
• Applications
• Databases
• Infrastructure Tools & DevOps
• Networking & Storage
• Languages & Runtimes
• Orchestration, Virtualization & Containers
• Operating Systems
• Cloud Infrastructure Providers

Verified Linux Operating Systems: Alma 8.5, Debian 11, Fedora 35, Oracle Linux 8.5 (SOC Certified), RHEL 8.5, Rocky 8.5, SLE SP3, Ubuntu 20.04 (SOC Certified)
LP = Low-profile PCIe add-in card (HHHL, half-height half-length)
FHHL = Full-height, half-length PCIe add-in card
FHFL = Full-height, full-length PCIe add-in card
DW = Double-wide add-in card
SW = Single-wide add-in card
Ampere® Mt. Hamilton 1-Socket MegaDC
Mt. Hamilton (Altra / Altra Max), up to 128 cores, PCIe Gen4 / DDR4

Common to all systems:
• Max CPU TDP: Up to 250W
• Memory: 16 DIMMs DDR4-3200 (8 channels, 2DPC), up to 4TB
• OCP Mezzanine: OCP 3.0 NIC, up to PCIe Gen4 x16
• Cabled PCIe: 6x PCIe Gen4 x8 SlimSAS cabled for NVMe, or riser option
• Security: TPM 2.0 header, Root-of-Trust (RoT) option
• Host/CPU network: CX4 Lx-EN for 2x 25Gb SFP28 w/ NC-SI
• BMC network: Realtek RTL8211E 1GbE
• On-board storage: 1x PCIe Gen4 x4 M.2 (2280/22110)
• BMC: AST2600
• Firmware: UEFI AMI AptioV & BMC OpenBMC; open-source base code available
• Power: Dual 80 PLUS Titanium redundant PSUs with PMBus 1.2 support (up to 860W for 1U systems; CRPS 2000/1600/1200/1000W for 2U systems)

Per system:
• ARS-110M-NR (1U, Front-End Server): 3x LP PCIe; 10x SFF U.2 NVMe; 6x 40x40x56mm fans
• ARS-510M-ACR12N4H (1U drawer, Object Storage): 3x LP PCIe (one occupied by the 3916 controller); 12x LFF SAS/SATA by 3916, plus 4x E1.S and 2x NVMe; 6x 40x40x56mm fans
• ARS-210M-NR (2U, Immersive Media): 4x DW GPU (or 7x FHFL SW) and 1x LP; 16x (24x) SFF NVMe option; 4x 80x80x38mm fans
• ARS-210ME-FNR (2U, Telco Edge): 4x FHHL SW and 1x LP, front OCP; 6x SFF U.2 NVMe (E1.S/E3.S option module); 4x 80x80x38mm fans
• ARS-520M-NRG (2U, Cloud Video Streaming): 2x DW GPU and 1x LP; SATA/SAS by 3816; 12x LFF (optional to add 2x NVMe); 3x 80x80x38mm fans
• ARS-520M-NRL (2U, Database): 3x LP PCIe; SATA/SAS by 3816; 12x LFF (optional to add 4x NVMe); 3x 80x80x38mm fans
Ampere® Roadmap: Innovating at an Annual Cadence

• 2021: Ampere® Altra® (7nm), 80 cores, DDR4, PCIe Gen4 (Mt. Hamilton)
• 2022: Ampere® Altra® Max (7nm), 128 cores, DDR4, PCIe Gen4 (Mt. Hamilton)
• 2023: AmpereOne® (5nm), Ampere cores, Arm ISA compliant, higher memory and IO/network bandwidth (Mt. Kim)
• 2024: AmpereOneX (5nm), Ampere cores, Arm ISA compliant, higher memory and IO/network bandwidth
Ampere® Mt. Kim 1-Socket MegaDC
Mt. Kim: AmpereOne (Siryn), up to 192 cores, PCIe Gen5 / DDR5

Common to all systems:
• Max CPU TDP: Up to 250W (1U systems) or up to 400W (2U systems)
• Memory: 16 DIMMs DDR5-4800 (8 channels, 2DPC), up to 8TB
• OCP Mezzanine: OCP 3.0 NIC w/ NC-SI, up to PCIe Gen5 x16
• Cabled PCIe: 6x PCIe Gen5 x8 MCIO cabled for NVMe, or riser option
• Security: TPM 2.0 header, Root-of-Trust (RoT) option
• Host/CPU network: BCM57414 for 2x 25Gb SFP28 w/ NC-SI
• BMC network: Realtek RTL8211E 1GbE
• On-board storage: 1x PCIe Gen5 x4 M.2 (2280/22110)
• BMC: AST2600
• Firmware: UEFI AMI AptioV & BMC OpenBMC; open-source base code available
• Power: Dual 80 PLUS Titanium redundant PSUs with PMBus 1.2 support (up to 860W for 1U systems; CRPS 2400/1600/1200/1000W for 2U systems)

Per system:
• ARS-111M-NR (1U, Front-End Server): 3x LP PCIe; 10x SFF U.2 NVMe; 6x 40x40x56mm fans
• ARS-511M-ACR12N4H (1U drawer, Object Storage): 3x LP PCIe (one occupied by the 3916 controller); 12x LFF SAS/SATA by 3916, plus 4x E1.S and 2x NVMe; 6x 40x40x56mm fans
• ARS-211M-NR (2U, Immersive Media): 4x DW GPU (or 7x FHFL SW) and 1x LP; 16x (24x) SFF NVMe option; 4x 80x80x38mm fans
• ARS-211ME-FNR (2U, Telco Edge): 4x FHHL SW and 1x LP, front OCP; 6x SFF U.2 NVMe (E1.S/E3.S option module); 4x 80x80x38mm fans
• ARS-521M-NRG (2U, Cloud Video Streaming): 2x DW GPU and 1x LP; SATA/SAS by 3816; 12x LFF (optional to add 2x NVMe); 3x 80x80x38mm fans
• ARS-521M-NRL (2U, Database): 3x LP PCIe; SATA/SAS by 3816; 12x LFF (optional to add 4x NVMe); 3x 80x80x38mm fans
OpenBMC Scalable Features

OpenBMC Scalable Features (List)
Search “Hamilton” in the Supermicro Website

1st Wave (MP): Jay, Fischer
• 2U4GPU, ARS-210M-NR
• 1U10NVMe, ARS-110M-NR

2nd Wave (Q1 2023): Claire, Ivy
• 2U12Bay, ARS-520M-NRL
• 2U Edge, ARS-210ME-FNR

Team roles: Ampere AM (US), System PM (US), Solution Manager, System PM (Taiwan), 5G / Edge Solution, Solution Architect, System Design (US), System Design (Taiwan)

Contacts:
• Jackie Pan (Ampere AM, US) <jpan@amperecomputing.com>
• Jay Chang <JayChang@supermicro.com>
• Claire Lu <clairelu@supermicro.com>
• Ivy Chen <ivychen@supermicro.com>
• Huy Vu <HuyV@supermicro.com>
• Fischer Huang <FischerH@supermicro.com.tw>
• Kobe Hsiung <kobeh@supermicro.com.tw>
• Michael Clegg <MichaelClegg@supermicro.com>
• Vince Chen <VinceC@supermicro.com>
• Robert Zhu <robertz@supermicro.com>
• Roger Chen (System Design, US) <Roger_chen@supermicro.com>
• Tommy Chang (System Design, Taiwan) <TommyC@supermicro.com.tw>
DISCLAIMER
Super Micro Computer, Inc. may make changes to specifications and product descriptions at any time, without notice. The
information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions
and typographical errors. Any performance tests and ratings are measured using systems that reflect the approximate
performance of Super Micro Computer, Inc. products as measured by those tests. Any differences in software or hardware
configuration may affect actual performance, and Super Micro Computer, Inc. does not control the design or implementation of
third party benchmarks or websites referenced in this document. The information contained herein is subject to change and may
be rendered inaccurate for many reasons, including but not limited to any changes in product and/or roadmap, component and
hardware revision changes, new model and/or product releases, software changes, firmware changes, or the like. Super Micro
Computer, Inc. assumes no obligation to update or otherwise correct or revise this information.
SUPER MICRO COMPUTER, INC. MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE
CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES, ERRORS OR OMISSIONS THAT
MAY APPEAR IN THIS INFORMATION.
SUPER MICRO COMPUTER, INC. SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL SUPER MICRO COMPUTER, INC. BE LIABLE TO ANY
PERSON FOR ANY DIRECT, INDIRECT, SPECIAL OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF
ANY INFORMATION CONTAINED HEREIN, EVEN IF SUPER MICRO COMPUTER, Inc. IS EXPRESSLY ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
ATTRIBUTION
© 2023 Super Micro Computer, Inc. All rights reserved.
2U4GPU, ARS-210M-NR

Specifications
Processor Support: 1x Ampere Altra / Altra Max CPU, Up to 128 Arm v8.2+ 64-bit CPU Cores at 3.00 GHz
Memory Capacity: Up to 16 DIMMs (2DPC), 8x 72-bit DDR4-3200 Channels, Up to 4TB of DRAM Memory Support
Expansion:
• 4x PCIe Gen4 x16 Double-Wide (DW) GPU Cards (or 7x Single-Wide FHFL, Cabled Option)
• 1x PCIe Gen4 x16 LP Card
• OCP 3.0 Mezzanine Card
Networking & I/O:
• Onboard: CX4 Lx-EN 2x 25Gb/s Ethernet, NC-SI Support
• BMC: Realtek RTL8211E Manageability Port
Drive Bays:
• Front: 4x SFF NVMe U.2 SSD (Option for 16-Bay or 24-Bay NVMe Support)
• Onboard: 1x PCIe Gen4 x4 M.2 NVMe SSD (2280/22110)
Storage Controller: Optional AOC 3816 Card for SAS/SATA Support
System Cooling: 4x 80mm High-Performance PWM Cooling Fans
Power Supply: Dual 80 PLUS Titanium Redundant PSUs, 1600W w/ PMBus 1.2 Support
System Management:
• IPMI 2.0, Redfish, WebUI, Serial-over-LAN (SOL), Remote KVM
• Hardware Health Monitor
• Security: TPM 2.0 Connector
• Firmware Support: UEFI Aptio V, BMC OpenBMC
• BMC: ASPEED AST2600 Baseboard Management Controller
Dimensions: 2U Form Factor, 17.25” (W) x 3.5” (H) x 25.5” (D)
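These systems expose IPMI and Redfish through OpenBMC, so fleet health can be polled over HTTPS. As a minimal, hypothetical sketch (the BMC address is a placeholder; the paths are standard DMTF Redfish service-root resources, not Supermicro-specific endpoints), a management script would build resource URLs like this:

```python
# Hypothetical helper for building DMTF Redfish resource URLs on a BMC.
# The host 10.0.0.42 is an assumed placeholder address.
def redfish_url(bmc_host: str, resource: str = "") -> str:
    """Return the HTTPS URL for a Redfish resource on the given BMC."""
    base = f"https://{bmc_host}/redfish/v1"
    return f"{base}/{resource.strip('/')}" if resource else base

# Typical resources a health-monitoring script would poll:
root = redfish_url("10.0.0.42")                        # service root
systems = redfish_url("10.0.0.42", "Systems")          # compute systems collection
thermal = redfish_url("10.0.0.42", "Chassis/1/Thermal")  # fan/temperature readings
```

An actual query would GET these URLs with the BMC's credentials; the chassis identifier ("1" here) varies by implementation.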
Building Block Solution, ARS-210M-NR
• 8-Bay Configuration: ARS-210M-NR (8Bay)
• PCIe Module Configuration (by request): ARS-210M-NR (PCIe)
• 16-Bay Configuration: ARS-210M-NR (16Bay)
• 24-Bay Configuration (by request): PIO-210M-NR-AC128 (24Bay)
17” Edge, ARS-210ME-FNR

Specifications
Processor Support: 1x Ampere Altra / Altra Max CPU, Up to 128 Arm v8.2+ 64-bit CPU Cores at 3.00 GHz
Memory Capacity: Up to 16 DIMMs (2DPC), 8x 72-bit DDR4-3200 Channels, Up to 4TB of DRAM Memory Support
Expansion:
• 4x x16 Single-Wide FHHL (1x Double-Wide FHFL GPU Support)
• 1x PCIe Gen4 x16 LP Card
• OCP 3.0 Mezzanine Card
Networking & I/O:
• Onboard: CX4 Lx-EN 2x 25Gb/s Ethernet, NC-SI Support
• BMC: Realtek RTL8211E Manageability Port
Drive Bays:
• Front: 6x SFF NVMe U.2 SSD
• Onboard: 1x PCIe Gen4 x4 M.2 NVMe SSD (2280/22110)
Storage Controller: Add-On Card Option
System Cooling: 4x 80mm High-Performance PWM Cooling Fans w/ Fan Door for 3+1 Redundant Hot Swap
Power Supply: Dual 80 PLUS Titanium Redundant PSUs, 1000W w/ PMBus 1.2 Support (-48V DC PSU Option)
System Management:
• IPMI 2.0, Redfish, WebUI, Serial-over-LAN (SOL), Remote KVM
• Hardware Health Monitor
• Security: TPM 2.0 Connector
• Firmware Support: UEFI Aptio V, BMC OpenBMC
• BMC: ASPEED AST2600 Baseboard Management Controller
Dimensions: 2U Form Factor, 17.25” (W) x 3.5” (H) x 16.9” (D)
MegaDC Modular Server Systems
• ARS-210ME-FNR, 17” Edge: 55℃ Ambient Temperature
• ARS-210M-NR, 2U4GPU: 35℃ Ambient Temperature
GPU Applications on Arm Systems (2U4GPU and Edge)
• A100 / H100: Universal for CGX, OVX, and HGX
• L40: Highest GPU Perf Visual Computing
• A10: Mainstream GPU with AI
• A16: Density VDI, Cloud Gaming
• T4 / L4: SMB Datacenter and Edge AI
1U10NVMe, ARS-110M-NR

Specifications
Processor Support: 1x Ampere Altra / Altra Max CPU, Up to 128 Arm v8.2+ 64-bit CPU Cores at 3.00 GHz
Memory Capacity: Up to 16 DIMMs (2DPC), 8x 72-bit DDR4-3200 Channels, Up to 4TB of DRAM Memory Support
Expansion:
• 3x PCIe Gen4 x16 LP Cards
• OCP 3.0 Mezzanine Card
Networking & I/O:
• Onboard: CX4 Lx-EN 2x 25Gb/s Ethernet, NC-SI Support
• BMC: Realtek RTL8211E Manageability Port
Drive Bays:
• Front: 10x SFF NVMe U.2 SSD
• Onboard: 1x PCIe Gen4 x4 M.2 NVMe SSD (2280/22110)
Storage Controller: Add-On Card Option
System Cooling: 6x 40mm High-Performance PWM Cooling Fans
Power Supply: Dual 80 PLUS Titanium Redundant PSUs, 860W w/ PMBus 1.2 Support
System Management:
• IPMI 2.0, Redfish, WebUI, Serial-over-LAN (SOL), Remote KVM
• Hardware Health Monitor
• Security: TPM 2.0 Connector
• Firmware Support: UEFI Aptio V, BMC OpenBMC
• BMC: ASPEED AST2600 Baseboard Management Controller
Dimensions: 1U Form Factor, 17.25” (W) x 1.7” (H) x 23.5” (D)
2U12Bay, ARS-520M-NRL

Specifications
Processor Support: 1x Ampere Altra / Altra Max CPU, Up to 128 Arm v8.2+ 64-bit CPU Cores at 3.00 GHz
Memory Capacity: Up to 16 DIMMs (2DPC), 8x 72-bit DDR4-3200 Channels, Up to 4TB of DRAM Memory Support
Expansion:
• 3x PCIe Gen4 x16 LP Cards
• OCP 3.0 Mezzanine Card
Networking & I/O:
• Onboard: CX4 Lx-EN 2x 25Gb/s Ethernet, NC-SI Support
• BMC: Realtek RTL8211E Manageability Port
Drive Bays:
• Front: 12x LFF
• Rear: 4x NVMe U.2 Option
• Onboard: 1x PCIe Gen4 x4 M.2 NVMe SSD (2280/22110)
Storage Controller: Optional Broadcom 3816 (IT Mode) or 3916 HW RAID
System Cooling: 3x 80mm High-Performance PWM Cooling Fans
Power Supply: Dual 80 PLUS Titanium Redundant PSUs, 750W w/ PMBus 1.2 Support
System Management:
• IPMI 2.0, Redfish, WebUI, Serial-over-LAN (SOL), Remote KVM
• Hardware Health Monitor
• Security: TPM 2.0 Connector
• Firmware Support: UEFI Aptio V, BMC OpenBMC
• BMC: ASPEED AST2600 Baseboard Management Controller
Dimensions: 2U Form Factor, 17.2” (W) x 3.5” (H) x 25.5” (D)
1U Drawer, ARS-510M-ACR12N4H (OEM SKU)

Specifications
Processor Support: 1x Ampere Altra / Altra Max CPU, Up to 128 Arm v8.2+ 64-bit CPU Cores at 3.00 GHz
Memory Capacity: Up to 16 DIMMs (2DPC), 8x 72-bit DDR4-3200 Channels, Up to 4TB of DRAM Memory Support
Expansion:
• 3x PCIe Gen4 x16 LP Cards
• OCP 3.0 Mezzanine Card
Networking & I/O:
• Onboard: CX4 Lx-EN 2x 25Gb/s Ethernet, NC-SI Support
• BMC: Realtek RTL8211E Manageability Port
Drive Bays:
• Drawer: 12x LFF SAS/SATA Hot-Swap (see Storage Controller below)
• Front: 4x E1.S and 2x NVMe (7mm, U.2) SSD
• Onboard: 1x PCIe Gen4 x4 M.2 NVMe SSD (2280/22110)
Storage Controller: Broadcom 3916 HW RAID Controller
System Cooling: 6x 40mm High-Performance PWM Cooling Fans
Power Supply: Dual 80 PLUS Titanium Redundant PSUs, 860W w/ PMBus 1.2 Support
System Management:
• IPMI 2.0, Redfish, WebUI, Serial-over-LAN (SOL), Remote KVM
• Hardware Health Monitor
• Security: TPM 2.0 Connector
• Firmware Support: UEFI Aptio V, BMC OpenBMC
• BMC: ASPEED AST2600 Baseboard Management Controller
Dimensions: 1U Form Factor, 17.6” (W) x 1.7” (H) x 37” (D)
2U2GPU, ARS-520M-NRG (OEM SKU)

Specifications
Processor Support: 1x Ampere Altra / Altra Max CPU, Up to 128 Arm v8.2+ 64-bit CPU Cores at 3.00 GHz
Memory Capacity: Up to 16 DIMMs (2DPC), 8x 72-bit DDR4-3200 Channels, Up to 4TB of DRAM Memory Support
Expansion:
• 2x PCIe Gen4 x16 Double-Wide (DW) GPU Cards (or 4x x16 Single-Wide FHFL, Cabled Option)
• 1x PCIe Gen4 x16 LP Card
• OCP 3.0 Mezzanine Card
Networking & I/O:
• Onboard: CX4 Lx-EN 2x 25Gb/s Ethernet, NC-SI Support
• BMC: Realtek RTL8211E Manageability Port
Drive Bays:
• Front: 12x LFF
• Rear: 2x NVMe U.2 Option
• Onboard: 1x PCIe Gen4 x4 M.2 NVMe SSD (2280/22110)
Storage Controller: Optional Broadcom 3816 (IT Mode) or 3916 HW RAID
System Cooling: 3x 80mm High-Performance PWM Cooling Fans
Power Supply: Dual 80 PLUS Titanium Redundant PSUs, 1200W w/ PMBus 1.2 Support
System Management:
• IPMI 2.0, Redfish, WebUI, Serial-over-LAN (SOL), Remote KVM
• Hardware Health Monitor
• Security: TPM 2.0 Connector
• Firmware Support: UEFI Aptio V, BMC OpenBMC
• BMC: ASPEED AST2600 Baseboard Management Controller
Dimensions: 2U Form Factor, 17.2” (W) x 3.5” (H) x 25.5” (D)
Killer Solutions
• Cloud Gaming
• CDN
• Object Storage
• All-in-One 5G
Comprehensive Portfolio for Popular Cloud Workloads (1S Platforms)

• FE Web (1U10NVMe, ARS-110M-NR): M128-30, 2DPC/2TB, 10GbE/25GbE, 2-4 drives, very low GPU attach, 1U
• Compute: M128-26, 1DPC/1TB, high-BW 50Gb+ network, 2 drives, no GPU, 1U
• Database (2U24NVMe, ARS-210M-NR, OEM): M128-30, 2DPC/4TB, high-BW 50Gb, 24 drives, no GPU, 2U
• IaaS: M128-30, 2DPC/1TB, high-BW 50Gb, 24 drives, very low GPU attach, 2U
• Inference (2U4GPU, ARS-210M-NR): Q80-30, 2DPC/2TB, 10GbE/25GbE, 2-4 drives, 4x A100, 2U
• Cloud Gaming (2U4GPU, ARS-210M-NR): M128-30, 1DPC/1TB, high-BW 50Gb, 2-4 drives, 4x A16 or 8x T4 (OEM), 2U
• Object Storage, cost-optimized (2U12bay, ARS-510M-NR): Q64-26, 1DPC/1TB, high-BW 50Gb, 12 LFF drives + 2-4 NVMe, no GPU, 2U
• 5G Edge (ARS-210ME-FNR): Q80-30, 1DPC/1TB, high-BW 50Gb with IEEE 1588 needed, 4-6 drives, vRAN accelerator cards, 2U 17” depth
MegaDC Mt. Hamilton 2U4GPU, ARS-210M-NR
• Cloud Gaming Applications Focus
• AIC (Android in Cloud)
ARS-210M-NR Hardware Platform

[Block diagram: Ampere Altra / Altra Max CPU (up to 128 cores, 8-channel DDR4, 16 DIMMs) with PCIe Gen4 x16 root complexes feeding the OCP 3.0 slot, Slot 3 (x16), and Slots 1/2/4/5 (x16, each hosting an NVIDIA A16), plus 4x4 cabled NVMe, M.2, LAN, and the BMC hub.]

AICAN with support from NVIDIA and Ampere: up to 4x A16 per system. Altra Max 1P + 4x A16 is designed for high-quality game streaming (720p@30fps, 1080p@30fps, 1080p@60fps) with PC graphics features (DLSS, RTX, etc.).

Ampere Arm-based Cloud Native Servers
AIC by NVIDIA SDK and Canonical Anbox

[Stack diagrams: Android containers (Android game, NVIDIA Android HAL, GLES/Vulkan, Android Cloud Library) running on LXD atop Ubuntu, on Ampere® Altra® Max with NVIDIA GPUs (Supermicro MegaDC Mt. Hamilton ARS-210M-NR). The NVIDIA Android Cloud SDK stack comprises the Stream SDK, Capture SDK, Optical Flow SDK, DLSS SDK, cloud gaming software, and cloud gaming driver; the Anbox stack adds Anbox Cloud services with H.264/VP8 streaming over WebRTC.]
Performance of Anbox with 1P Altra Max and A16 (Supermicro ARS-210M-NR)

[Charts: streaming container counts by resolution and frame rate, with CPU, GPU, GPU-memory, and GPU-encoding usage percentages for each configuration. The system sustains 128 containers at 720p@30fps, 1080p@30fps, 720p@60fps, and 1080p@60fps, and 80 containers at 4K@30fps; at 1080p@60fps CPU usage approaches 99% while the GPU metrics stay below ~80%.]

Test configuration:
• Altra Max 1P with 4x NVIDIA A16 GPUs, 512GB of memory
• anbox-cloud-appliance 1.16.0
• NVIDIA-SMI 515.86.01, Driver Version 515.86.01, CUDA Version 11.7
• App: Bombsquad-stress
• Linux hamiltongpu 5.4.0-132-generic #148-Ubuntu SMP Mon Oct 17 16:03:31 UTC 2022 aarch64 GNU/Linux
• Using WebRTC client benchmark: anbox-cloud-tests.benchmark
MegaDC Mt. Hamilton 17” Edge, ARS-210ME-FNR
• AI-Enabled 5G Edge Solution
• All-in-One 5G
• vCU/vDU/5GC at Edge
vRAN Accelerator Cards and Multi-Instance GPU (MIG) on Edge

5G vRAN Deployment

AI-Enabled 5G Edge

AI-Enabled 5G Edge Solution at MWC
MegaDC Mt. Hamilton 1U10NVMe, ARS-110M-NR
• Front End Server
• Object Storage
• CDN
Single FIO and Block Storage Performance

Power Consumption

Ceph Interoperability (Q64 vs. 7513)
Supermicro MegaDC Mt. Hamilton
• ARS-510M-ACR12N4H
• ARS-520M-NRL
• ARS-210ME-FNR
• ARS-210M-NR

www.supermicro.com