Installing vThunder on KVM
February, 2025
© 2025 A10 Networks, Inc. All rights reserved.
Information in this document is subject to change without notice.
PATENT PROTECTION
A10 Networks, Inc. products are protected by patents in the U.S. and elsewhere. The following website is provided
to satisfy the virtual patent marking provisions of various jurisdictions including the virtual patent marking
provisions of the America Invents Act. A10 Networks, Inc. products, including all Thunder Series products, are
protected by one or more of the U.S. patents and pending patent applications listed at:
a10-virtual-patent-marking.
TRADEMARKS
A10 Networks, Inc. trademarks are listed at: a10-trademarks
DISCLAIMER
This document does not create any express or implied warranty about A10 Networks, Inc. or about its products or
services, including but not limited to fitness for a particular use and non-infringement. A10 Networks, Inc. has made
reasonable efforts to verify that the information contained herein is accurate, but A10 Networks, Inc. assumes no
responsibility for its use. All information is provided "as-is." The product specifications and features described in
this publication are based on the latest information available; however, specifications are subject to change without
notice, and certain features may not be available upon initial product release. Contact A10 Networks, Inc. for
current information regarding its products or services. A10 Networks, Inc. products and services are subject to A10
Networks, Inc. standard terms and conditions.
ENVIRONMENTAL CONSIDERATIONS
Some electronic components may contain dangerous substances. For information on specific component
types, please contact the manufacturer of that component. Always consult local authorities for regulations
regarding proper disposal of electronic components in your area.
FURTHER INFORMATION
For additional information about A10 products, terms and conditions of delivery, and pricing, contact your nearest
A10 Networks, Inc. location, which can be found by visiting www.a10networks.com.
Table of Contents

Introduction
Installation Overview
Minimum System Requirements
Best Practices to Optimize vThunder VM Performance
    Understand VM Hardware
    Understand Network Performance
    Understand Virtual Machine Sizing
        Factors for Sizing vThunder VM
        Selection of Network Adapter Type
        vThunder Sizing Guidelines
    Understand VM Optimization Strategies
        CPU Pinning
        Isolating Pinned CPUs
        Huge memory pages
        NUMA memory pages
        Other Recommendations
License Information
Interfaces
    NIC Feature Matrix
Expand Virtual Hard Disk Size
Limitations
Install vThunder on KVM
    Step 1. Download the vThunder Image
    Step 2. Create a vThunder VM
    Step 3. Add a Bridge for Management Port
    Step 4. Assign Management Bridge to Management Port
    Step 5. Add a Bridge for Data Port
    Step 6. Configure the Data Interface
        Configure Virtio Data Interface
        Configure SR-IOV Data Interface
            Create Virtual Functions for SR-IOV
            Assign SR-IOV Virtual Function to vThunder
        Assign PCI-PT to vThunder
    Step 7. vThunder Installation
    Step 8. Verify Installation
    Step 9. Configure vThunder on KVM
Additional Resources – Where to go from here?
Introduction

vThunder for Kernel-based Virtual Machine (KVM) supports the ACOS Series application service capabilities. These capabilities include Application Delivery Controller (ADC), Carrier-Grade Networking (CGN), Convergent Firewall (CFW), Secure Sockets Layer Insight (SSLi), Server Load Balancer (SLB), and Threat Protection System (TPS).

When deployed on KVM, the maximum throughput of vThunder depends on the specific vThunder software license purchased and the virtual machine (VM) configuration. A10 Networks distributes vThunder software in two formats: an .iso file and a virtual disk image in qcow2 format (QEMU Copy-On-Write version 2). To deploy vThunder, install it on a hardware platform that runs KVM.

Figure 1 depicts vThunder operating on a commodity server using the KVM hypervisor.

Figure 1 : ACOS on a commodity server
Installation Overview

This document guides you through the installation process of vThunder on KVM using VirtIO, SR-IOV (Single Root I/O Virtualization), and PCI-PT (PCI Passthrough).

The following steps provide a high-level overview of vThunder installation on KVM:

1. Minimum System Requirements
   Ensure that your KVM environment meets the necessary hardware and software prerequisites. These requirements include CPU, memory, storage, and network specifications. For optimal performance and intended behavior of the network adapters, use the A10 Networks recommended and validated firmware and driver versions. See the vThunder/KVM Hardware and Software Compatibility Matrix.

2. Best Practices to Optimize vThunder VM Performance
   Explore some of the best practices to optimize vThunder VM performance:
   a. Understand VM Hardware
      Familiarize yourself with the hardware components of the virtual machine.
   b. Understand Network Performance
      Gain insights into network performance considerations for optimal operation.
   c. Understand Virtual Machine Sizing
      Determine the appropriate sizing for the virtual machine based on the deployment type.
   d. Understand VM Optimization Strategies
      Optimize the virtual machine (VM) to achieve optimal performance using the following optimization strategies:
      - CPU Pinning: Assign specific CPU cores to the vThunder VM. This improves consistency and reduces latency.
      - Isolating Pinned CPUs: Ensure that other VMs do not share the same pinned CPUs.
      - Huge memory pages: Enable huge memory pages for better memory performance.
      - NUMA memory pages: If your host has multiple NUMA nodes, configure NUMA memory settings.
      - Other Recommendations: Follow additional best practices for KVM virtualization.

3. Understand Virtual Machine Sizing
   Select the appropriate VM size according to your vThunder use case.

4. License Information
   Explore the different licensing models and licenses for vThunder.

5. Interfaces
   Configure the network interfaces for vThunder.

6. Expand Virtual Hard Disk Size
   Consider expanding the virtual hard disk size if needed.

7. Limitations
   Take note of any constraints or limitations associated with vThunder when deploying it on KVM.

8. Install vThunder on KVM using the .iso or .qcow image:
   Step 1. Download the vThunder Image
   Step 2. Create a vThunder VM
   Step 3. Add a Bridge for Management Port
   Step 4. Assign Management Bridge to Management Port
   Step 5. Add a Bridge for Data Port
   Step 6. Configure the Data Interface
   Step 7. vThunder Installation
   Step 8. Verify Installation
   Step 9. Configure vThunder on KVM
Minimum System Requirements

To configure ACOS Series application service capabilities, ensure that the KVM host where the vThunder VM is created meets the specified system requirements. For more information on supported network adapters, Ethernet ports, and NIC card models with their respective ACOS versions, refer to the vThunder/KVM Hardware and Software Compatibility Matrix.

Table 1 provides the minimum system requirements for installing or upgrading ACOS 6.x.

Table 1 : Minimum System Requirements (ACOS 6.x)

- Intel CPU with VT-d enabled
  NOTE: AMD is not supported.
- 6 vCPUs
- 16 GB of memory
- 128 GB of free disk space
- 2 Ethernet ports (1 management interface and 1 data interface)
  NOTE: For vCGN, vTPS, and vADC network applications involving control packets or control plane protocols, the data interface must be either SR-IOV or PCI-PT. The 'virtio' network interfaces are not supported as data interfaces for vCGN and vTPS applications.

For vThunder, it is recommended to use the following vCPUs, memory, and storage based on your requirements:

Table 2 : Recommended vThunder VM Template

Template | vCPUs  | Memory (GB) | Storage (GB)
Small    | 6 or 8 | 16          | 128
Medium   | 16     | 32          | 256
Large    | 32     | 64          | 384

NOTE:
- ACOS supports a maximum of 2 NUMA nodes.
- The NVIDIA Mellanox ConnectX-6 100 variant is only supported with 100 GbE and with the Quad Small Form-Factor Pluggable 28 (QSFP28) connector.
- For vCFW and vTPS applications involving high-volume traffic, the Intel E810 does not support Packet Prioritization.
- KVM vNICs, such as the 'virtio' interface, are primarily supported as management interfaces and have very limited data interface support. For low-speed ADC use cases, 1 Gbps or lower speed is recommended.
- When enabling jumbo packets, it is recommended to use the Medium or Large size VM template. This recommendation is based on the higher amount of I/O buffer required, which depends on the total available memory.
- Enable VT-d Virtualization Technology in the system BIOS.
- vTPS execution consumes a high amount of CPU resources, so it is recommended to use 8 or more CPUs.
- CPU isolation and pinning are required for optimal performance. This enables the VM to receive guaranteed CPU time from the assigned CPUs.
- For better performance, SSD (Solid State Drive) is recommended over HDD (Hard Disk Drive) for disk storage.
- With KVM SR-IOV interfaces, it is required to enable trust mode and disable spoofing for all virtual functions (VFs).
  - To enable trust mode, run the following command on the KVM host:
    $ ip link set dev <interface_name> vf <vf_id> trust on
  - To turn spoof checking off, run the following command on the KVM host:
    $ ip link set dev <interface_name> vf <vf_id> spoof off
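For hosts with multiple VFs, the same two settings can be applied in a loop. The following is a minimal sketch; the PF name enp61s0f0 and the VF count of 4 are illustrative and must be replaced with your own values:

$ # Apply trust on and spoof off to VFs 0-3 of the illustrative PF enp61s0f0
$ for vf in 0 1 2 3; do
>     ip link set dev enp61s0f0 vf $vf trust on
>     ip link set dev enp61s0f0 vf $vf spoof off
> done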
NOTE: To enable KVM SR-IOV support on Mellanox ConnectX-5 and ConnectX-6 network adapters, see the official NVIDIA Mellanox documentation website.

The default polling mode is recommended for high performance and low latency.

CAUTION: Shared polling mode will be deprecated by the end of 2024. For future-proofing, customers using this mode on versions 5.x or 6.x are recommended to switch to, or remain in, the default poll mode.

In certain scenarios, when using shared polling mode with newer Linux-based KVM versions, CPU cycles are consumed regardless of the polling mode configuration set on vThunder. Consequently, vThunder reports high host CPU usage.
Best Practices to Optimize vThunder VM Performance

Virtual machines (VMs) are designed and configured by default to share host resources. However, this default configuration may not be ideal for networking applications that require determinism while running on a VM. In a default VM configuration, some degree of performance deterioration may be experienced.

The following sections provide guidelines and instructions to maximize vThunder performance when running as a guest VM.

The following topics are covered:
- Understand VM Hardware
- Understand Network Performance
- Understand Virtual Machine Sizing
  - Factors for Sizing vThunder VM
  - Selection of Network Adapter Type
  - vThunder Sizing Guidelines
- Understand VM Optimization Strategies
  - CPU Pinning
  - Isolating Pinned CPUs
  - Huge memory pages
  - NUMA memory pages
  - Other Recommendations
Understand VM Hardware

Before optimizing the performance of a virtual machine, it is crucial to thoroughly understand the hardware on which the virtual machine will be deployed. This understanding is essential for achieving optimal performance enhancements.

Several tools are available for examining hardware specifications, but lstopo stands out as particularly valuable for providing a comprehensive architectural overview of the underlying hardware.

To get a graphical output that reveals key insights into the hardware architecture, execute the following command on the KVM host:

$ lstopo out.png

The following graphical output is obtained from running the above lstopo command on a Dell R760:

Figure 2 : Graphical output

The graphical output provides the following insights:

- CPU Topology: Displays the arrangement of CPU sockets, cores, and threads. The information indicates the number of physical CPUs, the cores per socket, and the presence of hyper-threading (multiple threads per core).
- Cache Hierarchy: Displays the hierarchy of CPU caches, including L1, L2, and L3 caches. The information indicates the size and organization of each cache level and how they are shared among CPU cores or sockets.
- NUMA Nodes: Displays the grouping of CPUs and memory into NUMA nodes.
- Interconnects: Represents the communication paths between different parts of the system. This information is important for assessing system bandwidth and latency.
- PCI Devices: Displays PCI devices and connectivity.
- Memory Layout: Indicates the memory layout of the system by illustrating NUMA nodes and the associated memory regions. This information is useful for memory optimization and understanding memory access patterns.

In addition to lstopo, consider using other tools such as lscpu, lshw, and dmidecode to gain a holistic understanding of the hardware environment. Administrators can then make informed decisions regarding virtual machine deployment and optimization strategies.
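For example, the following commands, using the tools mentioned above plus numactl (used later in this guide), can be run on the KVM host to cross-check what lstopo reports; the exact output varies by platform:

$ lscpu                                   # CPU counts, sockets, threads per core, NUMA nodes
$ numactl --hardware                      # NUMA node sizes and inter-node distances
$ sudo lshw -class network -short         # NICs and their PCI addresses
$ sudo dmidecode -t memory | grep -i size # installed DIMM sizes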
Understand Network Performance

Several factors determine the severity of virtualization's impact on VM performance:

- The number of concurrently running VMs.
- The number of virtual devices used by each VM.
- The types of devices used by the VMs.

Other specific reasons for VM performance issues are:

- Virtual CPUs (vCPUs) are implemented as threads on the host and are managed by the Linux scheduler.
- VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel.
- Disk and network I/O settings on the host can have a significant performance impact on the VM.
- Network traffic typically reaches a VM through a software-based bridge in the case of a vNIC.
- Depending on the host devices and their models, there might be significant overhead due to the emulation of particular hardware.
Understand Virtual Machine Sizing

The appropriate size for a vThunder virtual machine depends on several factors. This section outlines the primary considerations and provides guidelines for virtual machine sizing:

- Factors for Sizing vThunder VM
- Selection of Network Adapter Type
- vThunder Sizing Guidelines

Factors for Sizing vThunder VM

The process of determining the appropriate size for a vThunder VM relies on three primary factors:

vCPUs: The processing capacity is crucial for executing packet processing tasks efficiently within the virtual environment.

Memory: Memory plays a significant role, serving various purposes such as packet buffers, session tables, forwarding tables, and system memory requirements. The scaling of these functions is constrained by the available memory resources, which places predetermined limits on the resources accessible to applications.

NIC: The choice of network interface type significantly influences the effective throughput of the virtual machine. Different network adapter types, such as VirtIO, SR-IOV, or PCI Passthrough (PCI-PT), offer distinct advantages and trade-offs within the virtualized environment.
Selection of Network Adapter Type

Selecting the appropriate network adapter type is crucial for optimizing the virtual machine's network performance, because different network adapter types offer distinct advantages and trade-offs within the virtualized environment. Table 3 lists the types of supported network adapters.
Table 3 : Supported Types of Network Adapter

VirtIO
- Allows guest VMs to communicate with the underlying hypervisor in KVM environments.
- Provides a framework for paravirtualized I/O, enabling communication between the guest VMs and the virtualization host.
- Enables virtual machines and hypervisors to communicate effectively by serving as software constructs that utilize the computing and memory resources of the host. This arrangement can present challenges as shared resources start to compete and interfere. Additionally, using Linux bridges and the kernel network stack introduces extra layers and involves multiple data copies between user space and kernel space, as well as packet batching and buffering. Consequently, these factors can lead to performance issues, particularly at high data transfer rates, resulting in packet drops and very high latency.
  NOTE: Although alternatives like OVS-DPDK with zero copy can address these issues, the VirtIO interface with OVS-DPDK is not supported(1) in ACOS.

SR-IOV
- Allows a single physical network adapter to be shared among multiple virtual machines (VMs) with minimal performance overhead.
- Supports Virtual Functions (VFs): the physical network adapter is divided into multiple virtual functions (VFs), each of which behaves like a separate physical device.
- Bypasses the hypervisor for data transfer when the VFs are assigned directly to individual VMs.
- Improves network performance and reduces CPU overhead compared to traditional software-based virtualized networking approaches.
- Allows high-speed data transfer with low latency and minimal CPU usage, crucial for network-intensive applications like vCGN and vTPS.
- Allows a single physical network adapter to be shared among multiple virtual machines on the same host while maintaining near-native network performance.
- Offloads network processing tasks directly to the physical NIC, thereby reducing CPU overhead and latency.

PCI-PT
- Allows virtual machines to directly access and manage physical PCI devices installed on the host system.
- Provides a way to dedicate network interface cards to a VM, bypassing the hypervisor's virtualization layer.
- Allows high-speed data transfer with low latency and minimal CPU usage, crucial for network-intensive applications like vCGN and vTPS.

(1) "Not supported" refers to a scenario or use case that falls outside the recognized or validated configurations, features, or functionalities of the product. A10 Networks does not guarantee the performance, compatibility, or functionality of the product in such scenarios. If you encounter any challenges while attempting an unsupported use case, official support from A10 may not be available.
vThunder Sizing Guidelines

Table 4 provides recommended sizing guidelines for different deployment types based on network adapters, vCPUs, memory, throughput, and optimization strategies.

Table 4 : vThunder Sizing Guidelines

Deployment Type           | Network Adapters | vCPU    | Memory (GB)        | Throughput             | VM Optimization                                                          | Latency | Control Packet Prioritization
vADC Low performance      | VirtIO           | 6       | 16                 | Low (less than 1Gbps)  | CPU Pinning, Isolating Pinned CPUs                                       | High    | Not supported
vADC Mid-High performance | SR-IOV, PCI-PT   | 16 / 32 | 32 / 128           | 5Gbps and above        | CPU Pinning, Isolating Pinned CPUs, Huge memory pages, NUMA memory pages | Low     | Supported
vCGN                      | SR-IOV, PCI-PT   | 16 / 32 | 32 / 128 (256 GB*) | High                   | CPU Pinning, Isolating Pinned CPUs, Huge memory pages, NUMA memory pages | Low     | Supported
vTPS                      | SR-IOV, PCI-PT   | 16 / 32 | 32 / 128           | High                   | CPU Pinning, Isolating Pinned CPUs, Huge memory pages, NUMA memory pages | Low     | Supported
Additional Notes

- Virtualization addresses the issue of underutilized resources and offers a flexible approach to resource sharing across multiple applications. However, resource sharing can result in contention, leading to various issues arising from CPU and memory oversubscription. This can cause network applications to become unstable and unreliable and to drop packets.
- CPU pinning and CPU isolation on the host guarantee the assigned CPU cycles for the VM. A10 highly recommends utilizing CPU pinning and isolation to minimize the impact of CPU steal time (the noisy neighbour problem). Similarly, hugepages-backed VMs and NUMA awareness are necessary for optimal performance at very high data rates.
- ACOS has internal health checks, such as "DATA or IO CPU heartbeats", to monitor the health of the packet processing threads. These health check failures may be triggered by the noisy neighbour problem on the host. Similarly, packet drops, such as Txdrops or inMissErr, can be attributed to suboptimal usage of CPU and memory resources.
- In the event that the system becomes unreliable, with symptoms such as heartbeat failures, performance below expectations, network protocol flaps, packet drops on either ingress or egress, or silent reboots, A10 may request that the customer adhere to the recommended VM optimizations before further assistance can be provided on the reported issues.
Understand VM Optimization Strategies

This topic covers the following strategies for optimizing virtual machine (VM) performance:

- CPU Pinning
- Isolating Pinned CPUs
- Huge memory pages
- NUMA memory pages
- Other Recommendations
CPU Pinning
By default, KVM guests execute operations using multiple threads that represent
virtual processors. These threads are managed by the Linux scheduler, which
allocates them to available CPU cores based on priority queues and niceness.
However, each time the host scheduler reschedules a virtual CPU thread onto a
different physical CPU, the benefits of local CPU cache (L1/L2/L3) are lost. This can
significantly degrade guest performance. To address this issue, CPU pinning restricts
the physical CPUs where virtual CPUs can operate. Ideally, a one-to-one mapping
aligns virtual CPU cores with physical CPU cores.
The following recommendations apply when pinning virtual CPUs to physical CPU
cores:
- HyperThreads Sharing the Same Physical Cores: When pinning virtual CPUs, ensure that HyperThreads (logical cores) share the same physical cores. This arrangement also ensures that they utilize the same L1/L2/L3 cache.

  Figure 3 : HyperThreads Sharing the Same Physical Cores

- Physical Cores and PCIe Network Device in the Same NUMA Domain: Pin virtual CPUs to physical cores that belong to the same NUMA domain as the PCIe network device. This helps minimize latency and improve communication between the virtual machine and the network.
- Physical Cores Sharing the Same L3 Cache (CCD: Core-Complex-Die): Cores that share the same L3 cache (within a CCD) can enhance cache coherence. Consider pinning virtual CPUs to such cores for optimal performance.
To get detailed information about your system’s CPU architecture and configuration,
execute the following command on the KVM host:
$ lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   1    1      1    64:64:64:1    yes
2   0    0      2    12:12:12:0    yes
3   1    1      3    76:76:76:1    yes
4   0    0      4    1:1:1:0       yes
5   1    1      5    65:65:65:1    yes
6   0    0      6    13:13:13:0    yes
7   1    1      7    77:77:77:1    yes
...
84  0    0      36   9:9:9:0       yes
85  1    1      37   73:73:73:1    yes
86  0    0      38   21:21:21:0    yes
87  1    1      39   85:85:85:1    yes
88  0    0      40   10:10:10:0    yes
89  1    1      41   74:74:74:1    yes
90  0    0      42   22:22:22:0    yes
91  1    1      43   86:86:86:1    yes
92  0    0      44   11:11:11:0    yes
93  1    1      45   75:75:75:1    yes
94  0    0      46   23:23:23:0    yes
95  1    1      47   87:87:87:1    yes
The output provides the following CPU setup information:

- CPU: Indicates the numerical identifier for each CPU core.
- NODE: Indicates the NUMA node to which the CPU core belongs.
- SOCKET: Specifies the physical CPU socket.
- CORE: Indicates the core number within the socket.
- L1d:L1i:L2:L3: These values represent the cache hierarchy:
  - L1d: Level 1 data cache.
  - L1i: Level 1 instruction cache.
  - L2: Level 2 cache.
  - L3: Level 3 cache.
- ONLINE: Indicates whether the CPU core is online (available for use).
The following are the observations in the lscpu output:

- Even-Numbered vs. Odd-Numbered Cores:
  The even-numbered cores (e.g., cores 0, 2, 4, ...) and the odd-numbered cores (e.g., cores 1, 3, 5, ...) are associated with different NUMA nodes but share the same CCX (Core Complex). The CCX represents a group of cores within a processor package that share certain resources, such as L3 cache and memory access pathways.
- CCD (Core Complex Die) Grouping:
  Cores 0, 4, 8, ..., 84 belong to the same CCD (Core Complex Die) within the CCX. Cores 2, 6, 10, ..., 86 belong to another CCD within the same CCX. Although both CCX and CCD belong to the same NUMA architecture, they may exhibit differences in memory access patterns and input/output (IO) device access characteristics.
- Impact on System Performance:
  The differentiation in CCX and CCD grouping can also impact system performance. Understanding these distinctions is recommended for more effective optimization strategies, to maximize system performance and resource utilization.

Since all cores are connected to the same L3 cache in this example, you can ignore the CCD. How the CPUs are pinned and isolated does not matter much as long as they are proper thread pairs from the same NUMA node, for instance (0, 48), (2, 50), and so on. The following is an example of pinning the VM's virtual CPUs.
$ virsh edit vmname
...
<vcpu placement='static'>8</vcpu>
<iothreads>1</iothreads>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='48'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='50'/>
<vcpupin vcpu='4' cpuset='4'/>
<vcpupin vcpu='5' cpuset='52'/>
<vcpupin vcpu='6' cpuset='6'/>
<vcpupin vcpu='7' cpuset='54'/>
</cputune>
...
<cpu mode='host-passthrough'>
<topology sockets='1' cores='4' threads='2'/>
</cpu>
...
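After restarting the VM, the effective pinning can be verified from the host. For example, virsh vcpupin with no CPU list argument prints the current vCPU-to-CPU affinity (the output format varies by libvirt version):

$ virsh vcpupin vmname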
Isolating Pinned CPUs
CPU pinning alone does not inherently prevent other host processes from utilizing
the pinned CPUs. To effectively reduce lag in the guest virtual machine, it's essential
to properly isolate the pinned CPUs. This isolation ensures that only the intended
guest processes have access to these CPUs, minimizing interference from other host
processes and optimizing performance within the virtual environment.
There are many ways of isolating the CPUs on the host. The most common ways are to use a kernel command line parameter or systemd. The following example shows how systemd is used to isolate the CPUs dynamically:

$ systemctl set-property --runtime -- user.slice AllowedCPUs=1,3,5,7-47,49,51,53,55-95
$ systemctl set-property --runtime -- system.slice AllowedCPUs=1,3,5,7-47,49,51,53,55-95
$ systemctl set-property --runtime -- init.scope AllowedCPUs=1,3,5,7-47,49,51,53,55-95

These commands prevent the host system from running processes on the CPUs that are specifically allocated for running the guest VM.
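Alternatively, the isolation can be made persistent with the kernel command line parameter mentioned above. The following is a minimal sketch that matches the pinning example (CPUs 0,2,4,6,48,50,52,54 reserved for the guest); add it to the kernel arguments in /etc/default/grub, regenerate the GRUB configuration, and reboot:

GRUB_CMDLINE_LINUX="... isolcpus=0,2,4,6,48,50,52,54"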
Huge memory pages
When dealing with network applications, memory latency can become a problem due
to the large amount of memory used for packet buffers, session tables, forwarding
tables, etc. Applications often need to access information across multiple memory
"pages," which are the basic units for memory allocation. Resolving the actual
physical address of a memory page involves multiple steps, so CPUs typically cache
information from recently used memory pages to speed up future access to the same
pages.
Applications that use large amounts of memory, such as a virtual machine with 4 GiB of memory divided into 4 KiB pages (the default size for normal pages), encounter issues with cache misses. This can significantly increase memory latency, as there are approximately 1.05 million pages in use. Hugepages offer a solution to this problem by allocating larger individual pages to these applications. This increases the likelihood that multiple operations will target the same page in succession, thereby reducing cache misses and improving overall performance.

The following example first allocates hugepages on the host and then edits the VM configuration to allow it to benefit from them.

$ echo num > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

where "num" is the number of hugepages to allocate, each 1 GB in size in this example. The pages are allocated automatically and freed after the virtual machine stops.
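The VM can then be configured to consume the allocated hugepages. A minimal sketch of the corresponding libvirt XML follows; the same <memoryBacking> element appears in the fuller example under Other Recommendations:

$ virsh edit vmname
...
<memoryBacking>
  <hugepages>
    <page size='1048576' unit='KiB'/>
  </hugepages>
</memoryBacking>
...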
$ numactl --hardware
available: 2 nodes (0-1)
::
::
node distances:
node   0   1
  0:  10  21
  1:  21  10
NUMA memory pages

Cross-NUMA access, whether between cores and memory or between network devices and memory on another NUMA node, is very expensive and can be one of the main factors for suboptimal performance.

In the above numactl output, the distance between a core and memory in a different NUMA domain (21) is more than twice the distance between a core and memory within the same domain (10).

Pin the VM to physical cores and ensure that the memory is allocated from the node hosting those cores. The XML description for the VM is:

$ virsh edit vmname
...
<vcpu placement='static' cpuset='0,2,4,6,48,50,52,54'>8</vcpu>
<numatune>
  <memory nodeset='0'/>
</numatune>
Other Recommendations

The following are some additional recommendations for optimization:

- Most performance issues arise when the host CPUs are oversubscribed, leading to unpredictability as resources are shared and cannot be guaranteed. It is recommended to ensure that the total number of vCPUs assigned to the different VMs does not exceed the number of CPUs available on the system. If this cannot be guaranteed, it is recommended to isolate CPUs for the vThunder VM and pin those CPUs, as mentioned above. For configuring vcpupin, emulatorpin, and numatune settings, refer to the Red Hat Performance Tuning Guide.
- Ensure that the vCPU model of the VM aligns with the CPU model of the host. For example, to set the "vmname" VM to use the CPU model of the host:

  $ virt-xml vmname --edit --cpu host-model

- Ensure that in the memory allocation for a VM, the Maximum allocation matches the Current allocation. This ensures that the memory is preallocated to the VM and prevents memory ballooning, which may cause performance degradation.
- It is necessary to understand the limitations of how a virtual network works. A virtual network is a software construct that operates on a host machine. Virtual networking offers advantages, such as flexibility and cost-effectiveness. However, there are many disadvantages to consider:
  - Performance Overhead: Virtual networking introduces additional processing overhead compared to physical networking, which can impact network performance, especially in high-demand environments.
  - Limited Hardware Control: Virtual networks can limit the level of control and customization available to the guest VMs, which otherwise are available when the guests directly manage the physical network devices.
  - Complexity: Managing virtual networks can be more complex than managing physical networks and requires specialized knowledge and skills to configure and troubleshoot effectively.
  - Single Point of Failure: Virtual networks are susceptible to failures at the hypervisor or virtualization layer, which can disrupt network connectivity for all virtual machines hosted on the affected host.
  - Resource Sharing: Virtual networking involves sharing physical network resources among multiple virtual machines, which can lead to contention and performance issues during periods of high network activity.
  - Compatibility Issues: Virtual networking solutions may not always be compatible with certain hardware or software configurations, leading to interoperability issues and potential compatibility challenges.

  Due to this dependency and the host sharing resources with various other operations, the behaviour of virtual networking is not always predictable and guaranteed. Even a small traffic burst can cause disruption and render services unavailable, leading to high costs in debugging such issues. Therefore, a vNIC is not suitable or supported for use cases involving high-volume traffic or control packets/control plane protocols. For data traffic applications exceeding 1 Gbps or involving over 80K PPS, only SR-IOV or PCI-PT interfaces are supported.

- For better performance, SSDs or NVMe drives are recommended over HDDs for disk storage. This offers significant performance improvements, especially when the write-through disk cache option is selected.
NOTE: It is recommended to consider and utilize the write-back option. When the write-back option is enabled, data is first written to a cache before being transferred to the main storage. This allows write operations to be completed more quickly, as the system does not have to wait for the data to be written directly to the main storage.

- If the guest VM needs to use network devices spanning both NUMA nodes, the same techniques as mentioned above can be applied. vCPU pinning, emulator pinning, and NUMA tuning can all be configured according to their domain specifications. ACOS, even when running in a virtual machine, is still NUMA-aware and can operate according to those specifications.
$ virsh edit vmname
...
<memory unit='KiB'>134217728</memory>
<currentMemory unit='KiB'>134217728</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB' nodeset='0'/>
<page size='1048576' unit='KiB' nodeset='1'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>8</vcpu>
<cputune>
<shares>45056</shares>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='48'/>
<vcpupin vcpu='2' cpuset='1'/>
<vcpupin vcpu='3' cpuset='49'/>
<vcpupin vcpu='4' cpuset='2'/>
<vcpupin vcpu='5' cpuset='50'/>
<vcpupin vcpu='6' cpuset='3'/>
<vcpupin vcpu='7' cpuset='51'/>
<emulatorpin cpuset='0'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0-1'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
<memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
...
License Information

A10 Networks offers different types of licenses for your vThunder instance. To apply a license to a vThunder instance, see the Global License Manager (GLM) Guide.
Interfaces

The vThunder for KVM instance requires at least one data interface and one management interface. However, practical deployments necessitate two data ports: one for the inside network and one for the outside network. This section provides the information for creating one management interface and two data interfaces.

The management interface can run on the supported virtual NIC types, e1000 and VirtIO.

NOTE: The maximum throughput supported by the VirtIO interface as a data interface is less than 1 Gbps. It cannot be used as a data interface in vCGN or vTPS, or to support any high-volume traffic or use cases involving control packets or control plane protocols.
Figure 4 : vThunder for KVM interfaces
Figure 4 shows an example of three network interfaces:

- Management — Dedicated management interface
- Ethernet 1 — Data interface
- Ethernet 2 — Data interface

The vThunder for KVM management interface is connected to a port on the KVM host's virtual bridge, and the vThunder for KVM data interfaces are connected to Virtual Functions (VFs) on the virtualized NICs.

The management interface is bound to the default virtual bridge (virbr0).
vThunder also supports a management connection to the command line interface (CLI) through the vThunder for KVM serial console. The console is required for the initial configuration. Access the ACOS device on the Mgmt (Management), Ethernet 1 (Eth1), and Ethernet 2 (Eth2) interfaces after configuring IP addresses on them.

Typical deployments require connecting one data interface to the inside network and another data interface to the outside network, and providing NAT functions between them. Up to eight data interfaces can be configured. Standard topologies, such as routed mode and one-arm mode, are supported, while transparent mode and DSR mode are not supported.
NIC Feature Matrix

The following table provides the NIC feature matrix for ACOS 6.x:

Feature | VIRTIO (vNIC) | SR-IOV: Intel X710/XXV710/XL710/E810 | SR-IOV: NVIDIA Mellanox ConnectX-5/6 | PCI-PT: Intel X710/XXV710/XL710/E810 | PCI-PT: NVIDIA Mellanox ConnectX-5/6
Trunk (Static and LACP) | No | Yes* | Yes* | Yes | Yes
Routing Protocols (BGP, RIPv1, RIPv2, OSPF, ISIS), BFD, BPDU FWD groups | Yes | Yes | Yes | Yes | Yes
L2 (VLAN, VE, Tagged VLAN) | Yes | Yes | Yes | Yes | Yes
Static route, Static IP interface, IP NAT, IP route, NAT range-list | Yes | Yes | Yes | Yes | Yes
Network management interface, IP DHCP interface | Yes | Yes | Yes | Yes | Yes
Packet Prioritization** (vCGN/vTPS/vADC with protocol packets) | No | No | Yes | No | Yes
Port Mirror | No | No | No | No | No
Performance (Recommendation) | Up to 1Gbps | Up to 30Gbps | Up to 100Gbps | Up to 30Gbps | Up to 100Gbps

* For LACP/Trunk with SR-IOV interfaces, the respective Physical Functions (PFs) cannot be shared across different VMs.
** In the use case of vTPS/vCGN/vADC involving control packets or control plane protocols, network adapter support for the Packet Prioritization feature is a must to ensure the uninterrupted facilitation of high-volume traffic use cases.
NOTE: When the RoCEv2 protocol is enabled on a host configured with a Mellanox ConnectX-5/ConnectX-6 SR-IOV adapter, ACOS does not receive UDP traffic on port 4791. To ensure UDP packet reception on port 4791, it is recommended to disable the RoCEv2 protocol. The RoCEv2 protocol can be disabled on the host using NVIDIA Firmware Tools (MFT): # mlxconfig -y -d <pci_id> set ROCE_CONTROL=1. For more detailed usage instructions, see NVIDIA Firmware Tools and Examples of mlxconfig Usage.
Expand Virtual Hard Disk Size

You can expand the size of the virtual hard disk in a vThunder, even after the creation of the VM. The size of the virtual disk can only be expanded; it cannot be decreased.

To expand the virtual hard disk size:

1. Before shutting down the VM, back up the vThunder VM.
2. Power OFF the VM.
3. Log in to the KVM host through the CLI, and resize the qcow2 image of the corresponding VM with the qemu-img resize command. For example, to increase the size of the disk by 10 GB:

   $ qemu-img resize /var/lib/libvirt/images/<vThunder_name>.qcow2 +10G

4. Power ON the VM. The virtual hard disk size in the vThunder is expanded, and this is reflected in the increased size of a10data.
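To confirm the new size before powering the VM back on, inspect the image on the KVM host; the "virtual size" field in the output should reflect the increase:

$ qemu-img info /var/lib/libvirt/images/<vThunder_name>.qcow2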
Limitations
vThunder has the following limitations:

- vThunder utilizes DPDK, a high-performance networking framework. DPDK's poll mode continuously checks network interfaces for incoming packets, improving performance but using 100% of the assigned CPU cycles on the host system.
- The show cpu command in the ACOS CLI shows the CPU cycles used for packet processing, not the actual CPU usage. In poll mode, CPU usage is always 100% of the assigned CPUs on the host, as seen in the top command.
- The shared polling mode has limitations and may not always conserve assigned CPU cycles. On some systems, it consumes all the assigned CPU cycles, similar to the default poll mode. There are no plans to fix this issue in existing or upcoming ACOS releases.
- Interrupt mode and shared poll mode lack determinism in latency and performance. Newer systems, especially those supporting AI workloads, require predictable network behavior.
- ACOS does not support installing the Mellanox card firmware. To install the firmware, refer to one of the following sites:
  - Mellanox Documentation
  - Github Mellanox Documentation
- vThunder periodically experiences higher CPU usage due to fetching statistical data from the host physical function driver when using SR-IOV or VF interfaces (such as Intel 710 SR-IOV interfaces).
- If DPDK and SR-IOV are enabled on the Intel 82599/X710 adapter, additional steps are required to ensure proper reception of multi-cast packets. To receive the multi-cast packets properly, perform the following steps:
  1. Shut down the VM.
  2. Enable trust mode on the host for all Virtual Functions (VFs) expected to receive multi-cast traffic.

     $ ip link set dev <iface name> vf <vf-id> trust on

  3. Run the following iproute command on all the physical ports that are supposed to receive multi-cast packets on the VE interface.

     $ ip link set dev eth2 allmulticast on

  4. Power on the VM. It is now ready to receive multi-cast packets.

NOTE: For vThunder in KVM SR-IOV mode with the 82599/X710 adapters, unicast ping fails if the MTU of the host's physical port is not larger than 1500 bytes.
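To satisfy the MTU requirement, the MTU of the host's physical port can be raised before powering on the VM; a one-line example, where the interface name eth2 is illustrative:

$ ip link set dev eth2 mtu 9000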
Install vThunder on KVM

This section describes how to install vThunder on KVM. Before installing vThunder, you must download the vThunder image.

The following topics are covered:

Step 1. Download the vThunder Image
Step 2. Create a vThunder VM
Step 3. Add a Bridge for Management Port
Step 4. Assign Management Bridge to Management Port
Step 5. Add a Bridge for Data Port
Step 6. Configure the Data Interface
Step 7. vThunder Installation
Step 8. Verify Installation
Step 9. Configure vThunder on KVM
Step 1. Download the vThunder Image

To download the vThunder image for a new installation or an upgrade, perform the following steps:

1. Log into the A10 Support Portal with your GLM account credentials.

   NOTE: The A10 sales team should have set up a GLM account for you when you purchased the product. If you do not have a GLM account, contact the A10 Networks Sales team.

2. Navigate to Software > Thunder Series > <ACOS_version> > vThunder Appliance.
3. Click the required .iso or .qcow vThunder image for KVM to start the download.
4. Place the downloaded vThunder image on the Linux host where you want to create the vThunder VM.

   NOTE: Ensure you have the necessary permissions and access rights to the directory where you place the image.
Step 2. Create a vThunder VM

Once the vThunder software is downloaded and the Minimum System Requirements are met, you are ready to create a vThunder VM. The vThunder VM can be created using the Cockpit or Virt Manager interface tool.

A10 Networks recommends that you create and manage vThunder VMs in KVM using the Cockpit interface tool (version 264). For more information about managing VMs in the Cockpit Web Console, see the Cockpit official documentation.

To create a vThunder VM using Cockpit, perform the following steps:

1. Install Cockpit from the RHEL repository:

   $ sudo dnf install cockpit -y

2. Install the package for managing KVM virtual machines from the Cockpit console:

   $ sudo dnf install cockpit-machines -y

3. Enable the Cockpit socket service:

   $ systemctl enable cockpit.socket

4. Start the Cockpit socket service:

   $ systemctl start cockpit.socket

5. Verify the status of the Cockpit socket service:

   $ systemctl status cockpit.socket

6. Configure the firewall:

   $ sudo firewall-cmd --add-port=9090/tcp --permanent
   $ sudo firewall-cmd --reload

7. Launch the Cockpit web interface by entering the server URL in a supported browser. For example:

   https://<Server-IP>:9090

   The Cockpit Web Interface Login (RHEL) page is displayed.

   Figure 5 : Cockpit Web Interface Login

8. On the Cockpit Web Interface Login page, enter the root or sudo user credentials. The Cockpit Web Interface page with the name of the server that you installed is displayed.

   Figure 6 : Cockpit Web Interface

9. On the RHEL Home page, click Virtual Machines.
10. On the Virtual Machines page, click Create VM.
11. On the Create Virtual Machine page, enter the Table 5 configuration parameters and then click Create.

    Table 5 : Create a Virtual Machine

    Field | Description
    Name | Enter a name for the virtual machine.
    Installation Source type | Select the installation source type as Local Install Media.
    Installation Source | Browse to the .iso image that you want to use during the virtual machine installation.
    Operating System | Select the supported operating system.
    Storage | Select the hard disk capacity for the guest OS.
    Size | Select the storage size in Megabytes or Gigabytes.
    Memory | Select the RAM memory size in Megabytes or Gigabytes.
    (Optional) Run Unattended Installation | Select the option to install unattended upgrades on RHEL if not installed.
    (Optional) Immediately Start VM | Select the option to start the VM immediately after installation.

    The vThunder VM is created successfully and a confirmation message is displayed.

12. On the Virtual Machines page, click the recently created VM to edit the settings.

    Figure 7 : Virtual Machines

13. Click Overview and click the required parameters to edit or update the details. For example, click vCPUs; on the vCPU details page, update the vCPU Count and vCPU Maximum fields with the required values.

    Figure 8 : vCPU details

14. Click Apply. A confirmation message is displayed about the updated settings.
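As an alternative to the Cockpit workflow above, a vThunder VM can also be created from the host shell with virt-install when using the .qcow2 image. The following is a minimal sketch; the VM name, image path, and bridge name are illustrative, and the sizes follow the Small template in Table 2:

$ sudo virt-install \
    --name vthunder \
    --vcpus 6 \
    --memory 16384 \
    --disk path=/var/lib/libvirt/images/vthunder.qcow2,format=qcow2,bus=virtio \
    --import \
    --os-variant generic \
    --network bridge=virbr0,model=virtio \
    --graphics none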
Step 3. Add a Bridge for Management Port

To add a bridge for a management port, perform the following steps:

1. On the RHEL Home page, click Networking.
2. On the Networking page, under Interfaces, click Add bridge.

   Figure 9 : Add Bridge

3. On the Bridge settings page, enter the configuration parameters as specified in Table 6 and click Apply.

   Figure 10 : Bridge Settings

   Table 6 : Add a Bridge for Management Port

   Field | Description
   Name | Enter a name for the bridge.
   Ports | Select an interface that you want to assign to a subnet.
   (Optional) Options | Select the Spanning Tree Protocol (STP) option.

   A confirmation message is displayed.
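The same bridge can also be created from the host shell with nmcli instead of the Cockpit UI. The following is a sketch in which the bridge name br-mgmt and the enslaved physical port eno1 are illustrative:

$ sudo nmcli con add type bridge con-name br-mgmt ifname br-mgmt
$ sudo nmcli con add type bridge-slave con-name br-mgmt-port ifname eno1 master br-mgmt
$ sudo nmcli con up br-mgmt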
Step 4. Assign Management Bridge to Management Port

To assign the management bridge to a management port, perform the following steps:

1. On the RHEL Home page, click Virtual Machines.
2. On the Virtual Machines page, click the required virtual machine for which you want to assign an interface model.
3. On the Virtual Machine Details page, under Network Interfaces, click Edit.

   Figure 11 : Network Interfaces

4. On the Virtual Network Interface Settings page, edit the configuration parameters as specified in Table 7 and click Save.

   Figure 12 : Edit Virtual Network Interface Settings

   Table 7 : Edit Virtual Network Interface Settings

   Field | Description
   Interface type | Select an interface type.
   Source | Select a source.
   Model | Select an interface model. It is recommended to select the e1000 or virtio model.
   MAC address | The MAC address of the VM.

   A confirmation message is displayed.
Step 5. Add a Bridge for Data Port

Create one data port to install vThunder in KVM. The data port can be either VirtIO, SR-IOV, or PCI-PT. In this scenario, VirtIO is used.

To add a bridge for a data port, perform the following steps:

1. On the RHEL Home page, click Networking.
2. On the Networking page, under Interfaces, click Add bridge.
3. On the Bridge settings page, enter the configuration parameters as specified in Table 8 and click Apply.

   Figure 13 : Bridge Settings

   Table 8 : Add a Bridge for Data Port

   Field | Description
   Name | Enter a name for the bridge device.
   Ports | Select an interface that you want to assign to the bridge.
   (Optional) Options | Select the Spanning Tree Protocol (STP) option.

   A confirmation message is displayed.
Step 6. Configure the Data Interface

You can use any one of the following methods to configure the data interface:

- To configure KVM with Virtio, see Configure Virtio Data Interface.
- To configure KVM with SR-IOV, see Configure SR-IOV Data Interface.
- To assign KVM with PCI-PT, see Assign PCI-PT to vThunder.
Configure Virtio Data Interface

To configure a Virtio or e1000 data interface, perform the following steps:

1. On the RHEL Home page, click Virtual Machines.
2. On the Virtual Machines page, click the required virtual machine for which you want to configure the Virtio or e1000 interface model.
3. On the Virtual Machine Details page, under Network Interfaces, click Add Network Interface.

   Figure 14 : Network Interfaces

4. On the Add Virtual Network Interface page, enter the configuration parameters as specified in Table 9 and click Add.

   Figure 15 : Add Virtual Network Interface

   Table 9 : Add Virtual Network Interface

   Field | Description
   Interface type | Select the interface type as Bridge to LAN.
   Source | Select the active network interface on your host.
   Model | Select an interface model. It is recommended to select the e1000 or virtio model.
   MAC address | Select Generate automatically to generate the host MAC address automatically. Otherwise, select Set manually to specify the host MAC address manually.

   A confirmation message is displayed.

5. Continue with Step 7. vThunder Installation.

NOTE: The maximum throughput supported by the virtio interface as a data interface is less than 1 Gbps, and virtio cannot be used to support high-volume traffic, such as CGN and TPS applications, due to its limitation in providing a predictable throughput.
Configure SR-IOV Data Interface

Before configuring the host for SR-IOV, ensure that:

- The host is running an updated operating system.
- Virtualization technology and Intel VT-d are enabled in the BIOS.
- You have root access on the host.

To configure the SR-IOV data interface, perform the following steps:

1. Log into the Linux server.
2. Turn on the iommu parameter in the host Linux kernel, as IOMMU support is required for a VF to function when assigned to a VM.

   For Intel cards, add "iommu=force intel_iommu=on" to the entry "GRUB_CMDLINE_LINUX_DEFAULT=..." in the file /etc/default/grub.

   Example:

   # cat /boot/grub2/grub.cfg
   ...
   menuentry 'CentOS Linux (3.10.0-229.11.1.el7.x86_64) 7 (Core)' --class rhel fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-229.el7.x86_64-advanced-7837218d-e353-4524-9141-782727d2f8ca' {
       search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' c4e661a5-3f11-49a6-9a6b-be5a8e8e9881
       ...
       linux16 /vmlinuz-3.10.0-229.11.1.el7.x86_64 root=UUID=7837218d-e353-4524-9141-782727d2f8ca ro crashkernel=auto rhgb quiet LANG=en_US.UTF-8 systemd.debug intel_iommu=on
       initrd16 /initramfs-3.10.0-229.11.1.el7.x86_64.img
   }

3. Run the following command to apply the changes:

   update-grub

4. Reboot the Linux server.
5. Run the following command to confirm your changes:

   root@kvm:~# cat /proc/cmdline

   Check for the values in the output: iommu=force intel_iommu=on

6. To install the KVM-related software, run the following command:

   root@kvm:~# sudo apt-get install qemu-kvm libvirt-bin RHEL-vm-builder bridge-utils ethtool uuid

   For Mellanox cards, install the latest MLNX_OFED driver on the server by running the following command:

   # mlnxofedinstall

7. Run the following commands to create a management interface called virbr0 and add interfaces to it:

   root@kvm:~# brctl addbr virbr0
   root@kvm:~# brctl addif virbr0 eth0
   root@kvm:~# ip addr add mgmt_ip/mgmt_mask dev virbr0
   root@kvm:~# ip route add default via mgmt_gateway

8. Run the following command to verify that the SR-IOV device is listed, for example for the Intel 82576 card:

   root@kvm:~# lspci | grep 82576

   The command provides output similar to the following:

   0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
   0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
   0c:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

   The left-most portion of the lines above (for example, "0b:00.0") are the unique identifiers associated with the Ethernet controllers.

9. Copy the identifiers associated with the Ethernet controllers for use in the next step.
10. (Recommended) Add udev rules under /etc/udev/rules.d/ to ensure correct and persistent port mapping between the front-panel port numbers and the host kernel interface numbers.

    Create a file named "A10_persistentrules" and save it in the following folder: /etc/udev/rules.d/

    Populate the file with the following rules, replacing each of the respective kernel numbers with the PCIe numbers from the output of the lspci command that was copied in Step 9. For example:

    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:0b:00.0", NAME="eth0"
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:0b:00.1", NAME="eth1"
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:0c:00.0", NAME="eth2"
    ...

11. Run the reboot command to reboot the Linux server and apply the changes.
Create Virtual Functions for SR-IOV

To create virtual functions for SR-IOV on the host, use the following command:

# echo 1 > /sys/class/net/<ethX>/device/sriov_numvfs

In the above example, 1 VF is created.
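After creating the VFs, their presence can be verified on the host. For example (the interface name is a placeholder, as above):

$ lspci | grep -i "Virtual Function"
$ ip link show <ethX>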
Assign SR-IOV Virtual Function to vThunder

To assign the SR-IOV virtual function to the vThunder VM, perform the following steps:

1. On the Virtual Machines page, under the Host devices section, click Add host device.

   Figure 16 : Host Device

2. On the Add host device page, enter the configuration parameters as specified in Table 10 and click Add.

   Figure 17 : Add Host Device

   Table 10 : Assign SR-IOV Function to vThunder VM

   Field | Description
   Type | Select the interface type as PCI.
   Virtual Function | Select a virtual function.

   The assigned SR-IOV virtual function details are displayed under Host devices.

3. Continue with Step 7. vThunder Installation.
Assign PCI-PT to vThunder

To assign PCI-PT to the vThunder VM, perform the following steps:

1. On the RHEL Home page, click Virtual Machines.
2. On the Virtual Machines page, under the Host devices section, click Add host device.

   Figure 18 : Host Device

3. On the Add host device page, the configuration parameters as specified in Table 11 are displayed. Click Add.

   Figure 19 : Add Host Device

   Table 11 : Assign PCI-PT to vThunder VM

   Field | Description
   Type | Select the interface type as PCI.
   PCI Device | Select a PCI device.

   The assigned PCI-PT details are displayed under Host devices.

4. Continue with Step 7. vThunder Installation.

After data interface configuration, A10 Networks recommends that you install the vThunder license by using the GLM. For information, see the Global License Manager User Guide.
Step 7. vThunder Installation

To install vThunder, perform the following steps:

1. On the RHEL Home page, click Virtual Machines.
2. On the Virtual Machines page, click the required virtual machine that you want to install.
3. On the Overview tab, click Install. After initiating the installation, you can view the installation progress in the Console window.
4. At the login prompt, enter the default login credentials (install/password). The default credentials are used only for the initial setup.
5. Type Yes and press Enter to start the installation.

   All partitions in the virtual hard drive are deleted. Only one default Shared partition is created. To create more partitions, see the Application Delivery Partition guide.

   After installation, the vThunder VM reboots automatically. When the system boots up, it boots into ACOS.
Step 8. Verify Installation

After installation and system boot-up, the console prompts for the vThunder login credentials.

To get started after the installation process is complete:

1. Enter the ACOS default administrative login credentials (admin/a10) to log into vThunder.
2. Type enable to enable the system and press Enter. For the password, press Enter; no password is required.
3. Type configure to enter the configuration mode and press Enter.
4. Use the show interface brief command to view the interface details.
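A typical first-login sequence looks like the following sketch; the prompt strings are illustrative and depend on the configured hostname:

vThunder login: admin
Password: <a10>
vThunder> enable
Password: <press Enter>
vThunder# configure
vThunder(config)# show interface brief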
Step 9. Configure vThunder on KVM

To configure vThunder on KVM, see the "Common Setup Tasks" section of the System Configuration and Administration Guide.
Additional Resources – Where to go from here?

After logging into the Thunder GUI or CLI, if you need further assistance in configuring the device, refer to the following resources:

- For more information on new features, known issues, and other getting started information, see the New Features and Enhancements document and the Release Notes.
- For more information on basic deployment instructions, see the System Configuration and Administration Guide.
- For more information on a specific feature, see the A10 Networks documentation.

More useful links:

- vThunder Trial License
- Global License Manager
©2025 A10 Networks, Inc. All rights reserved. A10 Networks, the A10 Networks logo, ACOS, A10 Thunder, Thunder TPS, A10 Harmony, SSLi and SSL Insight are trademarks or registered trademarks of A10 Networks, Inc. in the United States and other countries. All other trademarks are property of their respective owners. A10 Networks assumes no responsibility for any inaccuracies in this document. A10 Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice. For the full list of trademarks, visit: www.a10networks.com/company/legal/trademarks/.