
Open RAN E2E Performance Benchmarking 15G and 16G Dell Servers
Summary of tests performed at Dell Open Telecom Ecosystem Lab (OTEL)
November 2023
H19844
White Paper
Abstract
This white paper uses KPI data to compare power efficiency and processing gains across 15G and 16G Dell servers.
Copyright
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2023 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, and other trademarks are
trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks of Intel
Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners. Published in
the USA 11/23 White Paper H19844.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.
Contents

Introduction
Open RAN E2E performance benchmarking
Performance tests on 15G and 16G
Test-line and RAN workload configurations
Test results and KPI data
Conclusion
Appendix and references
We value your feedback
Introduction
The telecom industry has recently undergone a significant digital transformation. This transformation includes the adoption of open technologies, disaggregated RAN, virtualization, cloud-native solutions, CI/CD, AI/ML, and more. The underlying Commercial Off-the-Shelf (COTS) hardware is a key lever for Communication Service Providers (CSPs) to boost operational efficiency in their total cost of ownership (TCO) analysis. For many users, another key consideration is the power efficiency offered by the platform. This paper describes the toolset and methods used to objectively measure key performance indicators (KPIs). It also highlights how these measurements determine the power efficiency and processing gains of the platform and provide guidance on positioning telecom-specific hardware infrastructure.
More specifically, this paper provides an overview of the latest 16G Dell servers with Intel 4th Generation Xeon Scalable Processors, engineered to host radio access network (RAN) workloads across various use cases. Dell offers a differentiated managed service, Open Telecom Ecosystem Lab (OTEL) Validation Services, that CSPs can leverage to augment their testing, integration, and validation programs.
Document purpose
This white paper describes the power consumption and baseband processing gains
based on performance benchmarking exercises to capture KPI data. From the KPI data,
we compared power efficiency and processing gains across 15G and 16G Dell servers.
This data and other operational efficiencies provide insights into the potential operating
costs over the typical lifespan of the server. The document also highlights the importance
of firmware versions of various server components and their settings to derive optimum
performance for the workload.
15G and 16G Dell Servers

The Dell PowerEdge XR11 is a 15G, 1U, short-depth, ruggedized, NEBS Level 3 compliant server that has been successfully deployed in multiple O-RAN compliant production networks. The next-generation Dell PowerEdge XR servers, the XR5610 and XR8000, provide a new infrastructure hardware foundation that allows CSPs to transition away from traditional, purpose-built baseband unit (BBU) appliances to an open, virtualized, or containerized RAN that decouples hardware and software and gives CSPs the choice to create open, best-in-class solutions from the multi-vendor ecosystem.
For more information, see the Dell Technologies webpage for PowerEdge XR Rugged
Servers.
16G Dell Servers (XR5610 and XR8000) form factors
The XR5610 server, like its predecessor the XR11, is a short-depth, ruggedized, single-socket, 1U monolithic server purpose-built for edge and telecom workloads. Its rugged design also accommodates military and defense deployments and retail AI use cases, including video monitoring, IoT device aggregation, and point-of-sale analytics.
The following features make the XR5610 suitable for edge deployments:
• Form factor and deployability
• Environment and rugged design
• Efficient power options
Figure 1. Dell XR5610 fixed form factor
The XR8000 is a short-depth, 400 mm class chassis with options for 1U or 2U half-width, hot-swappable compute sleds and up to four nodes per chassis. The XR8000 supports three sled configurations designed for flexible deployments: 4 x 1U sleds, 2 x 1U plus 1 x 2U sleds, or 2 x 2U sleds.
Figure 2. Dell XR8000 chassis with 2 x 2U horizontal slots
The 1U and 2U sleds are based on Intel 4th Generation Xeon Scalable Processors with up to 32 cores and support both Sapphire Rapids SP and Edge Enhanced (EE) with Intel® vRAN Boost processors. Both sled types have 8 x RDIMM slots, support for 2 x M.2 NVMe boot devices with optional RAID 1, two optional 25 GbE LAN-on-Motherboard (LoM) ports, and eight dry contact sensors through an RJ45 connector.
Figure 3. Dell XR8610 1U sled

Figure 4. Dell XR8620T 2U sled
Comparison of 16G Dell servers
The 4th Gen Intel Xeon Scalable processors offer the following benefits:
• Boost networking, storage, and compute performance while improving CPU utilization by offloading heavy tasks to an Intel Infrastructure Processing Unit
• Increase multi-socket bandwidth with Intel UPI 2.0 (up to 16 GT/s)
• Configure the CPU to meet specific workload needs with Intel Speed Select Technology (Intel SST)
• Increase shared last-level cache (LLC), with up to 100 MB of LLC shared across all cores
• Strengthen the security posture with hardware-enhanced security
• Eliminate the need for a separate RAID card with Intel Virtual RAID on CPU (Intel VROC)
The CPU is also available as an Edge Enhanced (EE) variant with embedded workload acceleration, in which case an external FEC accelerator card is not necessary. The performance benchmarking in this paper is focused on the 4th Gen Intel Xeon Scalable Processor with Intel vRAN Boost.
Figure 5. Comparison of physical features of the XR5610 and XR8000
Open RAN E2E performance benchmarking
Overview
The scope of the performance benchmarking is to cover Open RAN full-stack testing, which includes the DU (L1 High PHY + L2 scheduler) and CU (L3), along with E2E 3GPP-compliant test tools from vendors such as Keysight and Viavi. In this Open RAN architecture, running the performance tests requires end-to-end calls through simulated UEs. The typical test tools are the UE traffic generator, emulated 5G Core, and O-RAN 7.2 compliant Radio Unit (RU).
The 5G Open RAN architecture is more flexible, scalable, and efficient than previous generations of mobile networks. It promotes cloud-based technologies, SDN, and NFV to automate and streamline network management, including new services for real-time network optimization that achieve better quality and user experience. Unlike previous generations, it is also designed to provide higher data rates, lower latency, and improved network efficiency, all of which help reduce the network TCO.

This white paper highlights the ongoing improvements in the next generation of platforms that are capable of hosting Open RAN components at scale and contribute noticeably to TCO reduction.
The following list provides an overview of a 5G Open RAN architecture:
• Radio Units (RUs): RUs are the hardware components that transmit and receive radio signals to and from endpoint devices. The radio implements the lower PHY and sends the IQ samples coming from the DU over the fronthaul to the UE through RF, either wired in a simulated environment or over-the-air (OTA) in a real environment. RUs are usually deployed on cell towers or other elevated locations to provide wider coverage. In 5G, RUs are more energy-efficient and support higher data rates.
• Distributed Units (DUs): DUs are responsible for controlling and managing multiple RUs in a given area and are typically located closer to the RUs because of latency constraints. A DU normally consists of L1 (High PHY) and L2 (RLC, MAC). The following list describes the DU components:
  o L1 (High PHY) of the DU runs in real-time mode with time slots (for TTI/symbol boundaries) using the HW clock.
  o Time synchronization on the DU and RU is done by the Linux ptp4l service using PTPv2 packets coming as a boundary clock from the network.
  o The L2 stack (MAC and RLC) of the DU runs on the CPU as a service and is integrated with FlexRAN™ over WLS. It communicates with the CU over the F1 interface.
• Centralized Units (CUs): CUs are responsible for managing multiple DUs and coordinating the flow of data between the RAN and the core network. They are usually located in a centralized data center and can be shared by multiple operators. CUs use software-defined networking (SDN) and network function virtualization (NFV) to provide more flexible and efficient network management. The CU communicates with the 5G Core (in SA mode) using NG interfaces over the backhaul.
• Core network: The core network is responsible for managing user authentication, traffic routing, and other functions that are not directly related to the RAN. In 5G, the core network is designed to be more flexible and scalable than previous generations of cellular networks. It uses cloud-based technologies to provide more efficient network management and to support new services such as network slicing and edge computing.
The 5G RAN architecture typically includes a 7.2 split between the RU and the DU. This split simplifies packet transmission between the DU and RU over a cost-effective, standard Ethernet network. It also enables more efficient processing and transmission of data packets, resulting in improved network performance.
The 7.2 split architecture provides several benefits, including:
• Minimized transport bandwidth: The 7.2 split between the DU and RU helps minimize the transport bandwidth required for centralizing the RAN processing functions, the CU and DU.
• Scalability: This architecture enables more flexibility and better scalability in the 5G network. The DU and CU components can be scaled independently based on network requirements. This flexibility allows resources to be pooled and used more efficiently, securely, and agilely on demand.
• Improved efficiency: The split architecture enables more efficient use of network resources, which can result in lower costs and better performance.
• More flexible deployment: The split architecture enables more flexible deployment of network infrastructure, which can be customized to meet the needs of specific use cases.
OTEL test-line HLD

The following diagram provides an overview of the end-to-end high-level design of the 5G SA disaggregated test-line for the 15G and 16G servers that were used for Open RAN performance benchmarking.

Figure 6. High-level diagram (HLD) of the test-line at Dell OTEL
Performance tests on 15G and 16G
Measured KPIs
In the RAN environment, the DU does most of the heavy lifting: it runs in real time and is the most compute-intensive component. The DU must deliver performance on par with a conventional BBU.

The CPU on the DU processes the baseband signal. To make this processing more efficient, a HW acceleration card can be used to offload selected baseband processing functions.

We focused on the system and CPU power consumed by the DU platform (15G and 16G) while delivering a specifically designed load on the network.
Table 1. XR11 (15G) Dell Server KPIs

Benchmark KPI Measurement | Description
DU Hardware KPIs | Platform key performance indicators
# Core allocation | Number of CPU cores assigned for L1 processing
CPU Power | Power consumed by the CPU processing L1 High PHY
DU System Power | Total system power (DU stack) consumed processing the RAN workload
RAN load/capacity | RB utilization percentage
DU System Throughput - DL & UL | DU system level (vDU stack) throughput
L1 Throughput - DL & UL | L1 system level throughput
DU Total Bandwidth and Load | Total RAN capacity: # cells, # bandwidth
Number of PDSCH/PUSCH layers | MIMO layers
BLER and SINR | Evaluate relative BLER performance (vs a range of SINR values)
Test-line and RAN workload configurations
15G and 16G HW configurations

The following bullets describe the specific 15G and 16G servers used in the test-lines:
• 15G PowerEdge XR11: 3rd Gen Intel® CPU, Ice Lake-SP Gold 6338N, 32 cores
• 16G PowerEdge XR5610: 4th Gen Intel® CPU, SPR-EE-MCC Gold 6433N, 32 cores
Table 2. 15G and 16G Dell servers HW details

HW Components | 15G Dell XR11 | 16G Dell XR5610
CPU | Intel® ICELAKE-SP Gold 6338N 2.20 GHz 32C 64T | Intel® SPR-EE Gold 6433N 2.0 GHz 32C 64T
NUMA | 1 | 1
Memory | 128 GB | 128 GB / DDR5 / 4400 MTps
Storage | Type A / MTFDDAK960TDT / 960 GB | Type A / Dell DC NVMe PE8010 RI U.2 / 960 GB
L1 Look-aside | Intel® vRAN Accelerator ACC100 Adapter | Intel® vRAN Boost
NIC Embedded | Broadcom 4x25 G | Broadcom 4x25 G
NIC PCI | Intel E810 4x25 G XXV | Intel E810 4x25 G XXV
BIOS Version | 1.6.5 | 1.1.3
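For readers reproducing a similar setup, the platform details in Table 2 can be cross-checked on a running host with standard Linux tools. The commands below are a generic sketch, not a procedure taken from the OTEL test-line.

# Confirm CPU model, core count, and NUMA topology against Table 2.
lscpu | grep -E 'Model name|Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|NUMA node\(s\)'
# Confirm installed memory size and speed (requires root).
sudo dmidecode -t memory | grep -E '^\s*(Size|Speed):' | sort | uniq -c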
BIOS settings
The BIOS settings table for both 15G and 16G Dell servers is provided in the Appendix and
references section.
OS settings
Operating System Version
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
OS Realtime Kernel Version

Linux dell 5.15.0-1009-realtime #9-Ubuntu SMP PREEMPT_RT Thu Apr 21 20:33:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
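The OS details above come from standard commands, shown here so the environment can be reproduced and verified.

# Commands that produce the OS and kernel information listed above.
cat /etc/os-release          # Ubuntu 22.04.2 LTS (Jammy)
uname -a                     # 5.15.0-1009-realtime PREEMPT_RT kernel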
OS boot settings

intel_iommu=on iommu=pt cgroup_memory=1 cgroup_enable=memory
vfio_pci.enable_sriov=1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0
hugepagesz=1G hugepages=60 hugepagesz=2M hugepages=0 default_hugepagesz=1G
kthread_cpus=0,31,32,63 irqaffinity=0,31,32,63
isolcpus=managed_irq,domain,1-30,33-62 nohz_full=1-30,33-62 rcu_nocbs=1-30,33-62
intel_idle.max_cstate=0 skew_tick=1 nosoftlockup
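As an illustrative sketch (not the lab's exact provisioning steps), kernel boot parameters like the ones above are typically applied on Ubuntu through the GRUB configuration, followed by a reboot and a verification of the resulting command line and huge page allocation. The file path and variable name below are the standard Ubuntu defaults.

# Append the real-time and isolation parameters to the kernel command line
# (abbreviated here with "..."; use the full parameter list from above).
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt ... isolcpus=managed_irq,domain,1-30,33-62 nohz_full=1-30,33-62 rcu_nocbs=1-30,33-62"/' /etc/default/grub
sudo update-grub && sudo reboot

# After reboot, verify the active command line, isolated cores, and 1 GB huge pages.
cat /proc/cmdline
cat /sys/devices/system/cpu/isolated        # expect 1-30,33-62
grep HugePages_Total /proc/meminfo          # expect 60 (1 GB default huge pages)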
OS tuned-adm profile
Current active profile: realtime
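For reference, the realtime tuned profile is typically selected and verified as follows; this is a generic sketch of tuned usage on Ubuntu, not necessarily the exact lab procedure.

# Activate the realtime profile and confirm it is in effect.
sudo apt-get install -y tuned
sudo tuned-adm profile realtime
tuned-adm active        # expected: Current active profile: realtime
tuned-adm verify        # checks that the system matches the profile settings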
OS ptp4l Service

ptp4l.service - Precision Time Protocol (PTP) service
     Loaded: loaded (/etc/systemd/system/ptp4l.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2023-08-22 13:26:07 UTC; 2 days ago
       Docs: man:ptp4l
   Main PID: 2111 (ptp4l)
      Tasks: 1 (limit: 80047)
        CPU: 2h 3min 51.854s
     CGroup: /system.slice/ptp4l.service
             └─2111 /usr/sbin/ptp4l -f /etc/linuxptp/ptp4l.conf -i enp81s0f0 -2 -s

Aug 25 06:28:39 dell ptp4l[2111]: [234377.642] rms 3 max 5 freq +8 +/- 5 delay 566 +/- 2
Aug 25 06:28:40 dell ptp4l[2111]: [234378.642] rms 3 max 6 freq +5 +/- 5 delay 568 +/- 2
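To confirm that ptp4l has locked to the upstream boundary clock (LLS-C3 topology), the linuxptp management client can be queried as shown below; this is a generic check, with the interface name taken from the service output above.

# Query the local PTP port state over the ptp4l UNIX domain socket.
sudo pmc -u -b 0 'GET PORT_DATA_SET'     # portState should report SLAVE
# Watch the ptp4l offset statistics (rms values in the journal should stay small).
journalctl -u ptp4l.service -f | grep rms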
OS phc2sys Service

● phc2sys.service - Synchronize system clock or PTP hardware clock (PHC)
     Loaded: loaded (/etc/systemd/system/phc2sys.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2023-08-25 06:31:12 UTC; 18s ago
       Docs: man:phc2sys
   Main PID: 890563 (phc2sys)
      Tasks: 1 (limit: 80047)
        CPU: 4ms
     CGroup: /system.slice/phc2sys.service
             └─890563 /usr/sbin/phc2sys -s enp81s0f0 -r -n 24 -w

Aug 25 06:31:20 dell phc2sys[890563]: [234538.839] CLOCK_REALTIME phc offset -12 s2 freq -8794 delay 533
Aug 25 06:31:21 dell phc2sys[890563]: [234539.839] CLOCK_REALTIME phc offset  11 s2 freq -8774 delay 518
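Similarly, the PHC-to-system-clock alignment maintained by phc2sys can be spot-checked with the linuxptp phc_ctl tool and the service journal; a generic sketch using the enp81s0f0 port shown above.

# Compare the NIC's PTP hardware clock against CLOCK_REALTIME.
sudo phc_ctl enp81s0f0 cmp
# Follow the phc2sys offset log; offsets should stay within a few tens of nanoseconds,
# as in the sample output above.
journalctl -u phc2sys.service -f | grep offset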
OS timedatectl status

               Local time: Fri 2023-08-25 06:31:15 UTC
           Universal time: Fri 2023-08-25 06:31:15 UTC
                 RTC time: Fri 2023-08-25 06:31:15
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: inactive
          RTC in local TZ: no

RAN settings

Table 3. RAN workload subcomponents

Sr No | RAN Workload SW Component | Version
1 | Intel FlexRAN | 23.03
2 | DPDK | 22.11
3 | Radisys BareMetal CU & DU | 4.0.3
4 | GCC Compiler | 11.4.0
5 | Intel E810 ice driver | 1.9.11, FW 4.00 0x800118ae 21.5.9
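The software versions in Table 3 can be confirmed on the host with standard commands; the port name below (enp81s0f0) is taken from the PTP service output earlier and is shown only as an example.

# Verify the Intel E810 ice driver and NVM/firmware versions from Table 3.
ethtool -i enp81s0f0              # reports driver "ice", version, firmware-version
modinfo ice | grep ^version
gcc --version                     # GCC 11.4.0 used for the RAN workload builds
pkg-config --modversion libdpdk   # DPDK 22.11, if installed via packages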
Table 4. Cells configurations

Cell ID | Cell Name | MIMO Layers | Duplex Type | Band | CC# | Abs Carrier Frq Pt A (kHz) | Abs Carrier Frq SSB (kHz) | Carrier BW (MHz) | Carrier SCS (kHz) | TDD Format
1 | Cell 4x4 | 4x1 | TDD | n48 | 1 | 3509040 | 3516960 | 100 | 30 | DDDSU(12:2:0)
2 | Cell 4x4 | 4x1 | TDD | n48 | 1 | 3509040 | 3516960 | 100 | 30 | DDDSU(12:2:0)
3 | Cell 4x4 | 4x1 | TDD | n48 | 1 | 3509040 | 3516960 | 100 | 30 | DDDSU(12:2:0)
4 | Cell 4x4 | 4x1 | TDD | n48 | 1 | 3509040 | 3516960 | 100 | 30 | DDDSU(12:2:0)

Cell ID | Cell Name | DL QAM | DL MCS Index | UL QAM | UL MCS Index | DL Packet Size | UL Packet Size | IQ Compression | PRACH Compression
1 | Cell 4x4 | 256 | 27 | 64 | 27 | 1024 | 1024 | BFP 9 bits | BFP 9 bits
2 | Cell 4x4 | 256 | 27 | 64 | 27 | 1024 | 1024 | BFP 9 bits | BFP 9 bits
3 | Cell 4x4 | 256 | 27 | 64 | 27 | 1024 | 1024 | BFP 9 bits | BFP 9 bits
4 | Cell 4x4 | 256 | 27 | 64 | 27 | 1024 | 1024 | BFP 9 bits | BFP 9 bits
Table 5. 4xRUs configurations

RU ID | RU-Name | Radio Type | Antenna Ports | Duplex Type | Band | Cyclic Prefix | MIMO Type | FH Link Speed | Sync Mode | PTP Profile
1 | RU-01 | Emulated | 4T4R | TDD | n48 | Normal | SU-MIMO | 25 Gbps | LLS-C3 | G8275.1
2 | RU-02 | Emulated | 4T4R | TDD | n48 | Normal | SU-MIMO | 25 Gbps | LLS-C3 | G8275.1
3 | RU-03 | Emulated | 4T4R | TDD | n48 | Normal | SU-MIMO | 25 Gbps | LLS-C3 | G8275.1
4 | RU-04 | Emulated | 4T4R | TDD | n48 | Normal | SU-MIMO | 25 Gbps | LLS-C3 | G8275.1
Table 6. UEs configurations

UE # | UE-Name | UE Type | Antenna Elements Tx/Rx | Connection Type | Band | MIMO Layers | RF Conditions | Transport Packets | Traffic Duration
1 | Single-UE | Emulated | 4T4R | Wired | n48 | DL:4 UL:1 | Excellent | UDP | 5 Minutes
2 | 16-UEs | Emulated | 4T4R | Wired | n48 | DL:4 UL:1 | Excellent | UDP | 5 Minutes
Table 7. Load configurations

Sr # | Deployment Type | Load Type Per Cell
1 | Dense Urban | 100%
2 | Urban | 80%
Table 8. Test-Profiles settings

Sr # | Test Profile Name | # of Cells | # of UEs | Cell 1 Load | Cell 2 Load | Cell 3 Load | Cell 4 Load
1 | UDP_16UEs_1Cell_100%_PRB_Utilization | 1 | 16 | 100 | - | - | -
2 | UDP_16UEs_1Cell_80%_PRB_Utilization | 1 | 16 | 80 | - | - | -
3 | UDP_32UEs_2Cells_100%_PRB_Utilization | 2 | 32 | 100 | 100 | - | -
4 | UDP_32UEs_2Cells_80%_PRB_Utilization | 2 | 32 | 80 | 80 | - | -
5 | UDP_48UEs_3Cells_100%_PRB_Utilization | 3 | 48 | 100 | 100 | 100 | -
6 | UDP_48UEs_3Cells_80%_PRB_Utilization | 3 | 48 | 80 | 80 | 80 | -
7 | UDP_64UEs_4Cells_100%_PRB_Utilization | 4 | 64 | 100 | 100 | 100 | 100
8 | UDP_64UEs_4Cells_80%_PRB_Utilization | 4 | 64 | 80 | 80 | 80 | 80
Test results and KPI data
15G and 16G HW KPIs data

The XR5610 server was tested with SPR-EE-MCC CPUs that were not a GA release from Intel; these tests were conducted on OT samples of the CPU.

XR11 KPIs

We collected the KPIs while testing 1-4 cells with 100 percent or 80 percent UE load (PRB utilization), using 16 UEs per cell. The UE traffic was loaded in both downlink and uplink directions. The KPIs for the 15G and 16G servers are captured in the following tables:
Table 9. XR11 (15G) Dell Server KPIs

XR11 (ICELAKE-SP), Radisys 4.0.3 KPIs (DU) | 1 Cell - 16 UEs, 100% PRB | 1 Cell - 16 UEs, 80% PRB | 2 Cells - 32 UEs, 100% PRB | 2 Cells - 32 UEs, 80% PRB | 3 Cells - 48 UEs, 100% PRB | 3 Cells - 48 UEs, 80% PRB | 4 Cells - 64 UEs, 100% PRB | 4 Cells - 64 UEs, 80% PRB
DL Throughput (Mbps) | 1559.53 | 1112.43 | 3178.07 | 2698.08 | 4827.74 | 3997.15 | 6424 | 5240
UL Throughput (Mbps) | 69.66 | 55.62 | 138.68 | 116.26 | 207.5 | 173.19 | 275.9 | 228.92
System Power IDLE (Watts) | 182 | 182 | 183 | 183 | 184 | 184 | 194 | 194
System Power RAN (L2+L1) Stack, No UE Traffic (Watts) | 211 | 211 | 214 | 214 | 224 | 224 | 228 | 228
System Power with UE Traffic (Watts) | 218 | 217 | 225 | 223 | 247 | 246 | 259 | 254
CPU Power IDLE (Watts) | 67 | 67 | 67 | 67 | 67 | 67 | 78.6 | 78.6
CPU Power RAN Stack, No Traffic (Watts) | 97 | 97 | 98 | 98 | 110 | 110 | 110.7 | 110.7
CPU Power with UE Traffic (Watts) | 102 | 101 | 107 | 105 | 124 | 122 | 125.5 | 124.7
FAN Speed at Load (PWM) | 18.00% | 18.00% | 18.00% | 18.00% | 22.50% | 21.17% | 26.83% | 27.33%
DL BLER (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
UL BLER (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0.36 | 0.42
XR5610 KPIs

We collected the KPIs while testing 1-4 cells with 100 percent or 80 percent UE load (PRB utilization), using 16 UEs per cell. The UE traffic was loaded in both uplink and downlink directions.
Table 10. XR5610 (16G) Dell Server KPIs

XR5610 (SPR-EE-MCC), Radisys 4.0.3 KPIs (DU) | 1 Cell - 16 UEs, 100% PRB | 1 Cell - 16 UEs, 80% PRB | 2 Cells - 32 UEs, 100% PRB | 2 Cells - 32 UEs, 80% PRB | 3 Cells - 48 UEs, 100% PRB | 3 Cells - 48 UEs, 80% PRB | 4 Cells - 64 UEs, 100% PRB | 4 Cells - 64 UEs, 80% PRB
DL Throughput (Mbps) | 1635.18 | 1378.1 | 3276.27 | 2812.05 | 4823.16 | 3962.53 | 6428.8 | 5237.46
UL Throughput (Mbps) | 70.48 | 59.8 | 137.2 | 115.46 | 207.22 | 173 | 276.91 | 228.76
System Power IDLE (Watts) | 136 | 136 | 136 | 135 | 137 | 137 | 143 | 143
System Power RAN (L2+L1) Stack, No UE Traffic (Watts) | 153 | 153 | 154 | 154 | 165 | 165 | 170 | 170
System Power with UE Traffic (Watts) | 157 | 156 | 162 | 159 | 182 | 176 | 185 | 183
CPU Power IDLE (Watts) | 63 | 63 | 63 | 63 | 64 | 64 | 71.1 | 71.1
CPU Power RAN Stack, No Traffic (Watts) | 79 | 79 | 81 | 81 | 94 | 94 | 95.7 | 95.7
CPU Power with UE Traffic (Watts) | 82 | 82 | 85 | 85 | 98 | 97 | 106.2 | 105.1
FAN Speed at Load (PWM) | 14.00% | 14.00% | 14.00% | 14.00% | 14.00% | 14.00% | 14.00% | 14.00%
DL BLER (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
KPI analysis

The Open RAN E2E KPI analysis for 15G and 16G demonstrated several levels of improvement in 16G compared to 15G, as shown in the following figures.
Power consumption trend as RAN load grows

Figure 7. 15G vs 16G system power consumption trend with UE traffic
Improvements in system power consumption

We observed approximately a 28 percent reduction in system power consumption on the 16G server compared to the 15G server for the same UE traffic load.
Figure 8. 15G vs 16G performance gain in system power consumption during processing 5.24 Gbps of user traffic
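As a simple cross-check of the 28 percent figure, the saving can be recomputed directly from Tables 9 and 10 at the 4-cell, 64-UE, 80 percent PRB load point (about 5.24 Gbps of DL traffic):

# System power with UE traffic at 4 cells / 80% PRB:
# XR11 (Table 9) = 254 W, XR5610 (Table 10) = 183 W.
awk 'BEGIN { xr11 = 254; xr5610 = 183;
  printf "Saving: %d W (%.1f%%)\n", xr11 - xr5610, (xr11 - xr5610) / xr11 * 100 }'
# Output: Saving: 71 W (28.0%)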
Improvements in processing capacity
We also observed that 16G servers yielded twice the processing capacity compared to
15G for the same CPU power consumption.
Figure 9. 15G vs 16G processing capacity gain during processing 5.24 Gbps of user traffic at the same CPU power consumption
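As a rough cross-check of the 2x claim, compare DL throughput per watt of CPU power from Tables 9 and 10 at comparable CPU draw (the 80 percent PRB rows, about 5.24 Gbps on the 16G server):

# XR11 (Table 9):   2 cells @ 80% PRB, 2698.08 Mbps DL at 105 W CPU power.
# XR5610 (Table 10): 4 cells @ 80% PRB, 5237.46 Mbps DL at 105.1 W CPU power.
awk 'BEGIN { printf "XR11:   %.1f Mbps per CPU watt\nXR5610: %.1f Mbps per CPU watt\n",
             2698.08 / 105, 5237.46 / 105.1 }'
# Output: roughly 25.7 vs 49.8 Mbps per CPU watt, about a 1.9x gain at the same CPU power.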
Conclusion

Open RAN E2E performance benchmarking provides insights into the power efficiency and processing capability of a platform when look-aside acceleration is applied to selected functions for a specified user load.

Based on the measured KPIs, we conclude that the 16G platform lowers the power consumed to deliver the same UE throughput and scales up to a higher processing capability:
• System idle (without the RAN stack running): XR5610-SPR-EE (16G) uses 26% (51 W) less power than XR11 (15G).
• RAN stack running (no UEs attached, no traffic): XR5610-SPR-EE (16G) uses 25.4% (58 W) less power than XR11 (15G).
• RAN stack running (4 cells, 6.4 Gbps of traffic from 64 UEs): XR5610-SPR-EE (16G) uses 28% (74 W) less power than XR11 (15G).
Future testing will include assessing the operational efficiencies across other Telecom
layers.
Dell offers a differentiated managed service, OTEL Validation Services, that relieves CSPs from investing in expensive toolsets and staff skill sets for such evaluations. Instead, CSPs can leverage the OTEL offering to conduct testing, integration, and validation activities, such as qualifying network function workloads on the latest generation servers.
For more information about this service, contact your Dell Technologies representative.
Appendix and references
Data sources
The following list provides references to the data sources used to collect the information for this validation:
• Intel FlexRAN 23.03 MLOG and console outputs
• Ubuntu 22.04 kernel commands
• Radisys 4.0.3 DU and CU stats
• Load RAN driver stats

Firmware compatibility matrix

The following table highlights a typical firmware compatibility matrix as an example for hosting the RAN workloads.
Table 11. Firmware compatibility matrix

Components | RAN Vendor-A
iDRAC (XR11/R750) | 5.10.30.00
XR11 BIOS | 1.6.5
XR5610 BIOS | 1.1.3
BootMode | UEFI
ICE Driver | NVM 3.2
BIOS settings

The following BIOS settings are applied to the 15G XR11 and 16G XR5610 servers.
Table 12. 15G and 16G Dell Servers BIOS Settings

BIOS Parameters | Dell XR11 | Dell XR5610
Logical Processor | Enabled | Enabled
Virtualization Technology | Enabled | Enabled
AVX P1 | Level2 | Level2
HW Prefetcher | Enabled | Enabled
DCU IP Prefetcher | Enabled | Enabled
DCU Streamer Prefetcher | Enabled | Enabled
LLC Prefetch | Enabled | Disabled
Adjacent Cache Line Prefetch | Enabled | Enabled
XPT Prefetch | Enabled | Enabled
X2APIC Mode | Enabled | Enabled
AVX ICCP Pre-Grant License | Enabled | Enabled
AVX ICCP Pre-Grant Level | 512 Heavy | 512 Heavy
Processor Core Speed | 1.5 GHz | 1.4 GHz
Processor L3 Cache | 48 MB | 60 MB
SATA Devices | AHCI Mode | AHCI Mode
SR-IOV Global Enable | Enabled | Enabled
Memory Mapped I/O Above 4 GB | Enabled | N/A
AC Power Recovery | Last | Last
AC Power Recovery Delay | Immediate | Immediate
Workload Profile | TelcoOptimizedProfile | TelcoOptimizedProfile
Turbo Boost | Enabled | Enabled
CPU Power Management | OS DBPM | OS DBPM
C1E | Disabled | Disabled
C States | Enabled | Enabled
Memory Patrol Scrub | Disabled | Disabled
CPU Interconnect Bus Link Power Mgmt | Disabled | Enabled
Monitor/Mwait | Enabled | Enabled
Energy Efficient Policy | Performance | Performance
PCIe ASPM L1 Link Power Mgmt | Disabled | Disabled
GPSS Timer | 0us | 0us
System Profile | Custom | Custom
Uncore Frequency | MaxUFS | MaxUFS
Workload Configuration | IO Sensitive | IO Sensitive

These parameters span the Processor Settings, Integrated Devices, System Security, and System Profile Settings (BIOS.SysProfileSettings) BIOS groups.
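For reference, BIOS attributes such as those in Table 12 can be read and set out-of-band through iDRAC RACADM. The commands below are a generic sketch (placeholders for the iDRAC address and credentials), not the exact procedure used in the lab; the attribute group name matches the BIOS.SysProfileSettings group referenced above.

# Read the current System Profile Settings group from the target server's iDRAC.
racadm -r <idrac-ip> -u root -p <password> get BIOS.SysProfileSettings
# Set an attribute to match Table 12 (for example, System Profile = Custom).
racadm -r <idrac-ip> -u root -p <password> set BIOS.SysProfileSettings.SysProfile Custom
# Stage a BIOS configuration job and power-cycle the server to apply the change.
racadm -r <idrac-ip> -u root -p <password> jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW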
We value your feedback
Dell Technologies and the authors of this document welcome your feedback on the
solution and the solution documentation. Contact the Dell Technologies Solutions team by
email.
Authors: Neeraj Sharma, Nikunj Vaidya
Contributors: Deepak Ladwa, Vishal Mahajan, Goutham Vutharkar, Vedanth Pullagurla,
Suresh Raam, Joe Markey
Reviewers: Abdul Thakkadi, David Haddad, Ryan Mcmeniman, Jonathan Sprague
Dell Technologies documentation
The following Dell Technologies documentation provides additional and relevant
information. Access to these documents depends on your login credentials. If you do not
have access to a document, contact your Dell Technologies representative.
• PowerEdge XR11 Rack Server
Intel documentation
The following link provides the 4th Gen Intel Xeon Scalable Processors product brief:
• 4th Gen Intel® Xeon® Scalable Processors
Workload documentation
The following documents were used to install the Radisys workload, Intel FlexRAN, and DPDK packages, along with O-RAN Alliance specifications, Keysight documents, Wireshark, and more.
• Radisys Virtual and Open RAN
• Intel FlexRAN Reference Architecture
• DPDK documentation
• O-RAN Alliance: O-RAN Architecture Overview
• Keysight 5G Solutions