Dell VxRail 8.0.x
Internal Administration Guide
November 2023
Rev. 2
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Revision history.......................................................................................................................................................................... 7
Chapter 1: Introduction................................................................................................................. 8
Chapter 2: Manage VxRail account passwords............................................................................... 9
Change the VxRail management user password ........................................................................................................ 9
Change the VMware ESXi host management user password.................................................................................10
Chapter 3: Manage VLAN IDs and VxRail IP addresses.................................................................. 11
Add or remove the upstream DNS for internal DNS................................................................................................. 12
Change the external log server IP address in VxRail Manager............................................................................... 12
Change the IP address of the VMware vCenter Server VM...................................................................................15
Change the hostname and IP address for the VxRail Manager VM...................................................................... 16
Customize the default network IP address for docker ............................................................................................19
Chapter 4: Manage network settings........................................................................................... 21
Configure a VxRail node to support the PCIe adapter port.....................................................................................21
Configure host affinity for the VxRail Layer 3 management network..................................................................23
Configure jumbo frames.................................................................................................................................................. 24
Configure the proxy node for new segment node expansion.................................................................................26
Convert VxRail-managed VMware VDS to customer managed on a customer-managed VMware vCenter Server...............27
Enable a VxRail node to support the PCIE adapter port without an NDC connection.................................... 28
Enable dynamic LAG for two ports on a VxRail network.........................................................................................30
Verify the VxRail version on the cluster................................................................................................................ 30
Verify the health state of the VxRail cluster......................................................................................................... 31
Verify the VMware VDS health status.................................................................................................................... 31
Verify the VMware VDS uplinks................................................................................................................................31
Confirm isolation of the VxRail port group............................................................................................................32
Identify the NICs for LAG..........................................................................................................................................33
Identify assignment of the NICs to node ports....................................................................................................34
Identify the switch ports that are targeted for LAG using iDRAC.................................................................. 34
Prepare the switches for multichassis LAG .........................................................................................................35
Configure the first switch for LAG......................................................................................................................... 36
Configure the second ToR switch for LAG........................................................................................................... 36
Identify the load-balancing policy on the switches............................................................................................. 37
Configure the LACP policy on the VxRail VDS.....................................................................................................37
Verify the port flags....................................................................................................................................................38
Migrate the uplink to a LAG port.............................................................................................................................38
Migrate the LACP policy to the standby uplink................................................................................................... 39
Move the second VMNIC to LAG.............................................................................................................................41
Verify LAG connectivity on VxRail nodes.............................................................................................................. 42
Verify that LAG is configured in the VMware VDS............................................................................................. 42
Enable dynamic LAG for four ports on a VxRail network........................................................................................ 43
Verify the VxRail version on the VxRail cluster....................................................................................................43
Verify the health state of the VxRail cluster........................................................................................................ 43
Verify the VMware VDS health status................................................................................................................... 44
Verify the VMware VDS uplinks...............................................................................................................................44
Confirm isolation of the VxRail port group............................................................................................................44
Identify the NICs for LAG..........................................................................................................................................45
Identify assignment of the NICs to node ports....................................................................................................46
Identify the switch ports that are targeted for LAG using LLDP.................................................................... 46
Identify the switch ports that are targeted for LAG using iDRAC.................................................................. 47
Prepare the switches for multichassis LAG ......................................................................................................... 47
Identify the load-balancing policy on the switches............................................................................................. 48
Configure the LACP policy on the VxRail VDS.....................................................................................................49
Migrate the LACP policy to standby uplink...........................................................................................................50
Migrate an unused uplink to a LAG port.................................................................................................................51
Configure the first switch for LAG......................................................................................................................... 53
Verify LAG connectivity on the switch.................................................................................................................. 53
Verify LAG connectivity on VxRail nodes.............................................................................................................. 54
Move VMware vSAN or VMware vSphere vMotion traffic to LAG................................................................54
Verify that LAG is configured in the VMware VDS.............................................................................................55
Move the second VMNIC to LAG............................................................................................................................55
Configure the second ToR switch for LAG...........................................................................................................56
Verify LAG connectivity on the second switch....................................................................................................56
Verify LAG connectivity on VxRail nodes.............................................................................................................. 57
Enable network redundancy across NDC and PCIe ports....................................................................................... 57
Verify that the VxRail version supports network redundancy..........................................................................59
Verify that the VxRail cluster is healthy................................................................................................................ 59
Verify the VxRail physical network compatibility.................................................................................................60
Verify the physical switch port configuration....................................................................................................... 61
Verify active uplink on the VMware VDS port groups post migration........................................................... 62
Add uplinks to the VMware VDS............................................................................................................................. 62
Migrate the VxRail network traffic to a new VMNIC......................................................................................... 63
Set the port group teaming and failover policies.................................................................................................64
Remove the uplinks from the VMware VDS......................................................................................................... 65
Reset the VMware vSphere alerts for network uplink redundancy................................................................ 65
Enable VMware vSAN RDMA in the VxRail cluster.................................................................................................. 66
Enable two VMware VDS for VxRail traffic................................................................................................................ 67
Use case 1 - Enable uplinks....................................................................................................................................... 68
Use case 2 - Modify the cluster uplink configuration......................................................................................... 70
Use case 3 - Modify the cluster uplink configuration..........................................................................................71
Migrate the satellite node to a VMware VDS..............................................................................................................71
Capture the satellite node VMware standard switch settings.......................................................................... 71
Create the VMware VDS for the satellite node................................................................................................... 72
Set the MTU on the VMware VDS..........................................................................................................................73
Create the VMware VDS port groups for the satellite node............................................................................ 73
Migrate the satellite node to the new VMware VDS.......................................................................................... 74
Modify the VMware VDS port group teaming and failover policy.........................................................................75
Optimize cross-site traffic for VxRail........................................................................................................................... 76
Configure telemetry settings using curl commands ...........................................................................................78
Configure telemetry settings from VxRail Manager............................................................................................78
Chapter 5: Manage VxRail cluster settings.................................................................................. 80
Configure external storage for standard clusters..................................................................................................... 80
Convert one VMware VDS with two uplinks to two VMware VDS with two uplinks....................................... 82
Convert one VMware VDS to two VMware VDS...................................................................................................... 83
Identify the port groups............................................................................................................................................. 83
Convert one VMware VDS with four uplinks to two VMware VDS with four uplinks/two uplinks...............84
Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks ..................................... 85
Create a VMware VDS and assign two uplinks.................................................................................................... 85
Add existing VxRail nodes to VDS2........................................................................................................................ 85
Create the port group for VMware vSAN in VDS2.............................................................................................86
Create port group for VMware vSphere vMotion in VDS2...............................................................................86
Unassign uplink3 in VDS1...........................................................................................................................................86
Assign the released VMNIC to uplink1 in VDS2....................................................................................................86
Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port groups....................................................... 87
Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups ................................................. 87
Unassign uplink4 in VDS1...........................................................................................................................................88
Assign the released VMNIC to uplink2 in VDS2...................................................................................................88
Enable DPU offloads on VxRail...................................................................................................................................... 88
Enable the DPU offload after Day1 VxRail deployment......................................................................................89
Add a VxRail node....................................................................................................................................................... 90
Remove VxRail nodes..................................................................................................................................................91
Change the VxRail node IP address or hostname............................................................................................... 93
Enable Enhanced Linked Mode for VMware vCenter Server.................................................................................93
Repoint a single VMware vCenter Server node to an existing domain without a replication partner....94
Back up each VxRail node (optional)......................................................................................................................95
Repoint the VMware vCenter Server A of domain 1 to domain 2................................................................... 95
Update the VMware vCenter Server SSL certificates from VMware vCenter Server B.......................... 97
Refresh the node certificates in the VMware vCenter Server A.....................................................................97
Repoint the VMware vCenter Server node to a new domain...........................................................................97
Enable large cache tier capacity before VxRail cluster initialization.....................................................................98
Enable large cache tier capacity for an existing VxRail cluster............................................................................. 99
Remediate the CPU core count after node addition or replacement ................................................................ 100
Update the cluster status.........................................................................................................................................101
Trigger a rolling update............................................................................................................................................ 103
Submit install base updates for VxRail....................................................................................................................... 103
View CloudIQ information in VxRail.............................................................................................................................104
Chapter 6: Manage witness settings.......................................................................................... 105
Change the hostname and IP address of the witness sled................................................................................... 105
Change the IP address of the VxRail-managed witness sled..........................................................................105
Change the hostname of the witness sled.......................................................................................................... 109
Change the IP address of the VxRail-managed Witness VM.................................................................................113
Collect the VxRail-supplied witness configuration................................................................................................... 117
Separate witness traffic on an existing stretched cluster..................................................................................... 118
Chapter 7: Collect log bundles................................................................................................... 124
Collect the VxRail Manager log bundle.......................................................................................................................124
Collect log bundles from VxRail Manager.................................................................................................................. 125
Collect the VMware vCenter Server log bundle...................................................................................................... 125
Collect the VMware ESXi log bundle.......................................................................................................................... 126
Collect the iDRAC log bundle........................................................................................................................................127
Collect the platform log bundle.................................................................................................................................... 127
Collect the log bundle with node selection................................................................................................................128
Collect the log bundle with component selection....................................................................................................129
Collect the full log bundle.............................................................................................................................................. 130
Collect the witness log bundle...................................................................................................................................... 131
Delete log bundles from VxRail Manager....................................................................................................................131
Collect the satellite node log bundles from VxRail Manager................................................................................. 131
Delete the satellite node bundles from VxRail Manager ....................................................................................... 132
Chapter 8: Manage certificates.................................................................................................. 133
Import VMware vSphere SSL certificates to VxRail Manager............................................................................. 133
Import the VMware vCenter Server certificates into the VxRail Manager trust store.................................. 135
Import the VMware ESXi host certificates to VxRail Manager............................................................................ 137
Chapter 9: Rename VxRail components...................................................................................... 139
Change the FQDN of the VMware vCenter Server Appliance............................................................................. 139
Chapter 10: Remove VxRail nodes.............................................................................................. 143
Verify the VxRail cluster health....................................................................................................................................143
Verify the capacity, CPU, and memory requirements............................................................................................ 143
Remove the node.............................................................................................................................................................144
Reboot VxRail nodes.......................................................................................................................................................145
Chapter 11: Restore the VMware vCenter Server from a file-based backup.................................146
Chapter 12: VxRail Manager file-based backup........................................................................... 162
Back up the VxRail Manager manually........................................................................................................................162
Back up VxRail Manager................................................................................................................................................ 163
Configure automatic backup for the VxRail Manager.............................................................................................164
Manage the backup policy.............................................................................................................................................165
Chapter 13: VxRail Manager file-based restore........................................................................... 167
Restore the VxRail Manager using external DNS.....................................................................................................167
Restore the VxRail Manager using internal DNS...................................................................................................... 171
Revision history

Date            Revision    Description of change
November 2023   2           Updated for VxRail 8.0.200.
August 2023     1           Initial release.
Chapter 1: Introduction
This document describes some of the administrative tasks that you can perform for VxRail. It includes customer-facing procedures as well as procedures that are intended for partners and service personnel.
Security vulnerabilities
Before you perform any service or maintenance activity, see the VMware and Dell Technologies documentation on the critical Apache Log4j security vulnerability and implement the documented workarounds. See DSA-2021-265: Dell VxRail Security Update for Apache Log4j Remote Code Execution Vulnerability (CVE-2021-44228), KB 194466.
Audience
This document is intended for field personnel and partners who want to manage and operate VxRail clusters.
This document is also designed for people familiar with:
● Dell Technologies systems and software
● VMware virtualization products
● Data center appliances and infrastructure
● SolVe Online for VxRail
See the VxRail Documentation Quick Reference List for a complete list of VxRail documentation.
Chapter 2: Manage VxRail account passwords
When a management account changes or expires, VxRail Manager mutes health monitoring and displays alerts. After the VxRail Manager passwords are updated, the system returns to a normal state and health monitoring is unmuted.
The following accounts are set up during deployment with default logins:
● VMware vCenter Server administrator: The administrator account provides full authorization to all VMware vCenter Server
operations. The account name should be administrator@vsphere.local for customer-managed and VxRail-managed
VMware vCenter Servers.
● Management account: A management account for VxRail is created on the VMware PSC and each VMware ESXi host
as <username>@vsphere.local. In the VMware PSC, the VMware HCIA management permission is obtained after deployment. In
each VMware ESXi host, administrator permission is assigned after deployment. The customer selects the username during
deployment. For a customer-managed VMware vCenter Server, the customer creates this account without any permission or
group assigned.
● VMware vCenter Server and VMware PSC account: This account is the existing Linux root account in the VMware vCenter
Server and VMware PSC used for script execution and file uploading.
● VMware ESXi host root account: This is the existing VMware ESXi root account for each host that is used for script
execution and file uploading.
For account and password rules, see KB 158231.
Change passwords
Use the following to change passwords:
● Change the VMware vCenter Server root password and settings
● Change the VMware vCenter Server SSO password
● Change the VMware vCenter Server Appliance root password
● Change the customer-managed VMware vCenter Server Appliance root password
● Change the VMware ESXi host root password
● To change the VxRail root password, use the passwd command.
● To change the VxRail mystic password, use the passwd command.
● For the iDRAC username and password, see KB 133536.
Change the VxRail management user password
A default VxRail management account password is entered during deployment. You can change this password after deployment.
About this task
The following requirements apply for passwords:
● Eight to 20 characters
● One lowercase letter
● One uppercase letter
● One numeric character
● One special character
This procedure applies to VxRail clusters running VxRail 8.0.x and later versions. A VxRail 8.0.x cluster manages a VxRail-managed VMware vCenter Server or a customer-managed VMware vCenter Server.
This procedure is intended for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. To change the VMware management user password, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Click Administration from the main menu.
c. Under Single Sign On, click Users and Groups.
d. From the Domain drop-down list, select vsphere.local.
e. Select the VxRail Management username and click EDIT.
f. In the Edit User window, enter and confirm the password and then click Save.
2. To apply the password changes, perform the following:
a. Select the target cluster, and click the Configure tab.
b. Under VxRail, click System.
c. Click Update passwords.
d. In the Update Passwords wizard, enter the new password and click SUBMIT, and then click FINISH.
Change the VMware ESXi host management user password
Change the VMware ESXi host management user password.
Prerequisites
Go to VMware Docs and search for ESXi Passwords and Account Lockout.
About this task
This procedure is intended for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail
cluster. To change the VMware ESXi host root password, see Changing an ESXi/ESX host root password.
Steps
1. Log in to the VMware ESXi Host Client as a root user at https://<esxi_host_fqdn_or_ipaddr>/ui.
2. Select Host > Manage > Security & users > Users.
3. Select the VxRail management user and click Edit user.
4. In the Edit User window, enter the new password in the Password field.
5. Reenter the new password in the Confirm Password field and click Save.
6. To apply the password changes, perform the following:
a. Select the target cluster, and click the Configure tab.
b. Under VxRail, click System.
c. Click Update passwords.
d. In the Update Passwords wizard, enter the new password and click SUBMIT, and then click FINISH.
Chapter 3: Manage VLAN IDs and VxRail IP addresses
Change VLAN IDs and VxRail IP addresses using the following links.
For dynamic node clusters, procedures that are related to VMware vSAN or witness traffic are not applicable.
Any changes that you make to an IP address must remain within the same subnet. Changes outside the subnet are not supported.
Change VLAN IDs
Use the following links to change VLAN IDs:
● To change the VLAN ID for VM networks, VMware vSphere vMotion, VMware vSAN, or the VxRail-managed VMware
vCenter Server Appliance, see Change the VLAN ID of the VM Networks.
● To change the VLAN ID of Management and VMware vCenter Server Network, see Configure Virtual Machine Networking
on a vSphere Distributed Switch.
● To change the VLAN ID of witness port group in the L3 configuration for a 2-node cluster, see Deploying a vSAN Witness
Appliance.
Repoint NTP or DNS server IP addresses
Use the public API to repoint the NTP server IP address or DNS server IP address. For additional information about changing the DNS server IP address, go to DNS server IP on VxRail 8.0 releases using the REST API. To repoint to a new DNS server IP address, see VxRail API - Set DNS of VxRail cluster.
To add or remove the upstream DNS for internal DNS, go to Add or remove the upstream DNS for internal DNS.
Change IP addresses
Use the following links to change VxRail IP addresses:
● To change the VMware ESXi IP address or hostname, go to VxRail Manager.
● To change the VxRail Manager VM hostname and IP address, go to Change the hostname and IP address for the VxRail Manager VM.
● To change the IP address of the VMware vCenter Server VM, see Change the IP address of the VMware vCenter Server VM.
● To change the IP address of the VMware vSphere vMotion network, see Manage VMkernel Network Adapters in the VMware Host Client.
● Change the IP Address of the vSAN Network for the customer-managed VMware vCenter Server Appliance (internal link - requires login)
● Change the IP Address of the vSAN Network for the VxRail-managed VMware vCenter Server Appliance (internal link - requires login)
● Change the IP Address of Witness Traffic in an L2 configuration for a 2-node cluster
● Change the IP Address of Witness Traffic in an L3 configuration for a 2-node cluster
Customize the default network IP address for docker
To change the IP address and the CIDR of the RKE2 cluster, go to Customize the default network IP address for docker.
Add or remove the upstream DNS for internal DNS
Add or remove the upstream DNS when using an internal DNS. If a cluster uses an external DNS, you can resolve the FQDN
outside the cluster. If a cluster uses an internal DNS, add the record manually.
Prerequisites
Verify that the VxRail cluster is using the internal DNS.
Download the python upstream_dns_operation.py script (.zip) and extract the file from https://dl.dell.com/downloads/DL100623_upstream_dns_operation.zip.
About this task
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To add the upstream DNS, enter:
python upstream_dns_operation.py add -s <upstream_dns_ipaddress>
4. To verify that the upstream DNS is added, enter:
nslookup <new_FQDN>
The new FQDN must be resolved by the upstream DNS and not by the internal DNS.
5. To remove the upstream DNS, enter:
python upstream_dns_operation.py remove -s <upstream_dns_ipaddress>
6. To view the DNS help options, enter:
python upstream_dns_operation.py -h
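For example, a minimal end-to-end sequence, assuming a hypothetical upstream DNS server at 10.10.10.53 that serves a hypothetical record host01.example.com:

python upstream_dns_operation.py add -s 10.10.10.53
nslookup host01.example.com
python upstream_dns_operation.py remove -s 10.10.10.53

If the nslookup command returns an address for host01.example.com, the upstream DNS is active; after the remove command, the same lookup should fail.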
Change the external log server IP address in VxRail Manager
After you initialize VxRail, you can either change your log server IP address or deploy a new log server with a different IP address. Then you can update the VxRail cluster with the external log server's new IP address.
Prerequisites
● For a new external log server, verify that the logging service provides UDP syslog reception on port 514.
● Configure the VMware vCenter Server for the new syslog servers.
● Back up the /var/lib/vmware-marvin/config-initial.json files before making changes.
● Update the /var/lib/vmware-marvin/config-initial.json files with the new external log server IP addresses.
About this task
This procedure applies to a VxRail-managed VMware vCenter Server with an external log server that manages a VxRail 8.0.x or later cluster. See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the VxRail Manager VM.
3. From the Summary tab, click LAUNCH REMOTE CONSOLE or LAUNCH WEB CONSOLE and log in to the VxRail
Manager as root.
4. To copy the file, enter:
cp /usr/lib/vmware-marvin/marvind/webapps/ROOT/WEB-INF/classes/config/vxrail-syslogconfig /etc/rsyslog.conf
5. To locate *.* @loginsightip:514 and replace it with *.* @<sys_log_server_ip_address>:514, enter:
sed -i 's/loginsightip/<sys_log_server_ip_address>/g' /etc/rsyslog.conf
Keep the space before @ and use the real syslog server IP address.
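For example, assuming the new syslog server is at 20.100.10.10 (the address also used in the JSON sample later in this procedure):

sed -i 's/loginsightip/20.100.10.10/g' /etc/rsyslog.conf

This rewrites the forwarding rule to *.* @20.100.10.10:514 while preserving the space before the @.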
6. To append the short.term.log, keep a blank line below the command line and enter:
cat <<EOF>>/etc/rsyslog.conf
\$InputFileName /var/log/microservice_log/short.term.log
\$InputFileTag VxRail
\$InputFileStateFile VxRail-Log-State
\$InputRunFileMonitor
EOF
7. To stop the VxRail services, enter:
service vmware-marvin stop
8. To edit /etc/rsyslog.conf and change the IP address, enter:
vi /etc/rsyslog.conf
9. Locate Marvin log to loginsight as shown in the following example:
#
# Marvin log to loginsight
#
$ModLoad imfile
$InputFileName /var/log/vmware/marvin/tomcat/logs/marvin.log
$InputFileTag VxRail
$InputFileStateFile VxRail-Log-State
$InputRunFileMonitor
*.* @sys_log_server_IP_address:514
###
10. To change the IP address on the *.* line to the new IP address, enter:
*.* @<new_external_log_server_ip_address>:514
:wq
11. Save and close the file.
12. To update the IP address in cluster_properties table in marvin, perform the following:
a. Log in to the marvin database and enter:
psql -U postgres -d vxrail;
b. To update the log server IP address, enter:
update configuration.configuration SET value='new_log_server_ip_address' where category='setting' and key='sys_log_server' and value='sys_log_server_ip_address';
c. To exit the database, enter: \q
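A minimal sketch of the full sequence, assuming a hypothetical old log server at 20.100.10.9 and a new log server at 20.100.10.10:

psql -U postgres -d vxrail
update configuration.configuration SET value='20.100.10.10' where category='setting' and key='sys_log_server' and value='20.100.10.9';
select value from configuration.configuration where category='setting' and key='sys_log_server';
\q

The select statement confirms the new value before you exit the database.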
13. To update the configuration JSON files on the VxRail Manager VM, perform the following:
a. Use an editor such as vi and search for the keyword global followed by the keyword syslog_servers.
b. Replace the original external log server IP address in the syslog_servers parameter with the new external log server IP address, as shown in the following example:
"global": {
    "ntp_servers": [],
    "is_internal_dns": false,
    "dns_servers": [
        "20.100.10.7"
    ],
    "syslog_servers": ["20.100.10.10"],
    ...
NOTE: The format of the JSON file is a continuous string of characters and must be saved in that format.
14. To start the VxRail and messaging logging services, enter:
service vmware-marvin restart
service rsyslog restart
15. From the log server, verify that the log entries in marvin.log are transferred to the syslog service output log file. For example, log entries should appear in /var/log/syslog if /var/log/syslog is configured as the syslog service output log file.
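A quick check from the log server, assuming /var/log/syslog is the configured output file:

tail -f /var/log/syslog | grep VxRail

Entries tagged VxRail (the $InputFileTag value set earlier in this procedure) appear as new lines are written to marvin.log.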
16. Log in to the VMware vCenter Server VAMI on port 5480.
17. Select the Access tab.
18. Click Edit and enable the VMware vCenter Server SSH Login.
19. Log in to the VMware vSphere Web Client as administrator.
20. To edit /etc/rsyslog.conf, enter:
vi /etc/rsyslog.conf
21. Locate the vpxd log to loginsight as shown in the following example:
$ModLoad imfile
#Add all the required VC log files to be redirected to the syslog server
#vpxd-log
$InputFileName /var/log/vmware/vpxd/vpxd.log
$InputFileTag vcsa-vpxd
$InputFileStateFile vcsa-vpxd-logstate
$InputRunFileMonitor
*.* @sys_log_server_ip_address
22. To change the IP address on the *.* @sys_log_server_ip_address line to the new IP address, enter:
*.* @<new_external_log_server_ip_address>:514
:wq
systemctl restart rsyslog
23. Save and close the file.
24. Restore the changes that are made to the VMware vCenter Server configuration on port 5480 for security considerations.
25. From the log server, verify that the log entries in the vCenter vpxd.log are transferred to the syslog service output log file. For example, log entries should appear in /var/log/syslog if /var/log/syslog is configured as the syslog service output log file.
Change the IP address of the VMware vCenter Server VM
If the VMware vCenter Server system name is an FQDN, you can change the IP address for the VxRail-managed VMware
vCenter Server.
Prerequisites
When a VMware ESXi host enters maintenance mode, some configurations cannot be protected against VM failures, and the affected VMs must be shut down.
The following VxRail cluster configurations are not protected:
● VxRail clusters with three nodes.
● VxRail all-flash clusters with four nodes and RAID 5.
● VxRail all-flash clusters with six nodes and RAID 6.
CAUTION: If the VMs are not shut down, failures may occur.
About this task
During deployment, if you set the IP address as a system name, you cannot change the IP address. The system name is used as
the primary network identifier.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
See the VxRail 8.0.x Support Matrix for a list of supported versions.
See KB 2130599 for more information.
CAUTION: You cannot perform these steps in VMware VCF and VVD environments.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. If the cluster is using an external DNS, manually update the DNS server. If the cluster is using an internal DNS, log in to
VxRail Manager and update the internal DNS server.
a. Using SSH, log in to the VxRail Manager VM as mystic.
b. To switch the user to root, enter:
su root
c. To change the IP address, enter:
sed -i "s/<old_ip_addr>/<new_ip_addr>/g" /etc/hosts
d. To restart the dnsmasq service, enter:
systemctl restart dnsmasq
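A minimal sketch of this step, assuming the VMware vCenter Server VM moves from the hypothetical address 20.12.13.100 to 20.12.13.101:

su root
sed -i "s/20.12.13.100/20.12.13.101/g" /etc/hosts
systemctl restart dnsmasq

Both addresses are placeholders; substitute the actual old and new VMware vCenter Server IP addresses.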
3. To clear the VMware vCenter Server DNS cache, perform the following:
a. Log in to the VMware vCenter Server Appliance management interface (VAMI) as root at https://<vCSA_original_ip_addr>:5480
b. From the left-menu, select Access.
c. If the SSH Login is disabled, click EDIT.
d. Enable the SSH Login and click OK.
e. Using SSH, log in to the VMware vCenter Server as root.
f. To restart the dnsmasq service, enter:
systemctl restart dnsmasq
If you are using an internal DNS server, update your local client using the DNS or local host file.
4. To modify the IPv4 address settings in VAMI, perform the following:
a. Log in to the VAMI as root at https://<vCSA_original_IP_addr>:5480.
b. From the left menu, select Networking and click EDIT.
c. Select the Network Adapter and click NEXT.
d. Expand the network interface name to update the IP address settings.
e. Enter the updated IPv4 address settings and click NEXT.
f. Enter the SSO administrator credentials for the VMware vCenter Server and click NEXT.
g. Review the requested changes to the VMware vCenter Server IP address settings.
h. Scroll down to acknowledge that the VMware vCenter Server backup is taken and click FINISH.
You may lose connectivity when updating the VMware vCenter Server IP address. On the UI, the process may appear to freeze at around 50 percent.
5. To add and reconnect the VMware ESXi hosts to the VMware vCenter Server Inventory, perform the following:
If the VMware vCenter Server version is 7.0, use the FQDN to log in to the VMware vCenter Server. Ensure that the IP
address is reverse resolvable to FQDN on the client. See KB 71387 for more information.
a. Log in to the VMware vSphere Web Client as an administrator.
When you return to the VMware vCenter Server, the VMware ESXi hosts display Not responding.
b. Reconnect to each VMware ESXi host manually. Right-click the VMware ESXi host and select Connection > Connect.
c. To add the VMware ESXi host to the VMware vCenter Server Inventory, confirm the VMware ESXi FQDN and click
NEXT.
d. Enter the root username and the VMware ESXi host's root password. Click NEXT.
e. Accept the default settings for the host summary and VM location, and click FINISH.
f. Repeat these steps on all the VMware ESXi hosts.
6. OPTIONAL: To connect to the VMware ESXi hosts, perform the following:
Using the VMware ESXi shell or SSH, you might have to restart the Management agents. See Restarting the Management
agents in ESXi for more information.
a. Log in to the VMware vSphere Web Client as administrator and select Hosts.
When you return to the VMware vCenter Server, the VMware ESXi hosts display Not responding.
b. If any VMware ESXi hosts are RED, reconnect to the VMware ESXi host manually. Right-click the VMware ESXi host and select Connection > Connect.
c. Click Yes.
7. Update the Log Insight.
If you have the VMware Log Insight VM, update with the new IP address for the VMware vCenter Server. Go to VMware
Docs and see the Log Insight Configuration Guide for more information.
Change the hostname and IP address for the VxRail Manager VM
After deployment, you can update the static IPv4 address (netmask or gateway address) and the hostname that is assigned to
the VxRail Manager VM.
Prerequisites
● Verify access to the VxRail Manager VM using the remote console or SSH. If you are using SSH, log in to the VxRail Manager
as mystic and switch to root.
● Log in to the VMware vCenter Server using the VMware vSphere Web Client and take a snapshot of the VxRail Manager
VM. Also, take a snapshot of all the service VMs such as VMware vCSA and VMware vRealize Log Insight.
About this task
You cannot perform this task in VxRail VCF and VMware Validated Design (VVD) environments.
This procedure applies to the VxRail cluster running the VxRail version 7.0.x and VxRail version 8.0.x and later. See the VxRail
7.0.x Support Matrix and VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. For the external DNS, update the DNS server with the <new_vxm_ipaddr> and <new_vxm_fqdn> and restart the service.
2. Log in to the VMware vSphere Web Client as an administrator.
3. To turn off the Health Monitoring Status, perform the following:
a. Select the Inventory icon and select the VxRail cluster.
b. Click the Configure tab and select VxRail > Health Monitoring.
c. Under VxRail Cluster Health Monitoring, disable health monitoring.
d. Disable the Health Monitoring Status.
4. To update the port group VLAN of the VxRail Manager when the VLAN changes, perform the following:
a. Select the Inventory icon and select the VxRail cluster.
b. Select the VxRail Manager VM.
c. Click the Summary tab, and in the Related Objects window under Networks, locate the port group name.
d. From the left menu, select a VxRail cluster and click the Networks tab.
e. Right-click the port group and click Edit Settings to modify the VLAN ID.
5. To update the IP address of the VxRail Manager, perform the following:
a. Select the Inventory icon and select the VxRail cluster.
b. Select the VxRail Manager VM.
c. Click the Summary tab and select LAUNCH REMOTE CONSOLE.
d. Log in to the VxRail Manager as mystic.
e. To switch to root, enter:
su root
f. To get the VxRail Manager FQDN, enter:
/opt/vmware/share/vami/vami_fullhostname
g. To change the IP address, enter:
/opt/vmware/share/vami/vami_set_network eth0 STATICV4 <new_vxm_ipaddr> <new_vxm_netmask> <new_vxm_gateway>
h. To verify the IP address, enter:
/opt/vmware/share/vami/vami_get_network
i. To reset the hostname, enter:
/opt/vmware/share/vami/vami_set_hostname <old_vxm_fqdn>
Connect to the VxRail Manager with the recently configured IP address and verify the changes. After you change the IP
address, if the VxRail cluster uses the internal DNS, the cluster may lose its connection on all the nodes. If you lose the
connection, use SSH to reconnect to the VxRail Manager using the new IP address.
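For example, a sketch that assigns the address 20.12.13.211 (the address used in the certificate example later in this procedure) with an assumed /24 netmask and a hypothetical gateway:

/opt/vmware/share/vami/vami_set_network eth0 STATICV4 20.12.13.211 255.255.255.0 20.12.13.1
/opt/vmware/share/vami/vami_get_network

The second command confirms the new settings. Substitute your actual netmask and gateway values.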
6. To change the internal DNS address on VAMI, perform the following:
a. Log in to the VAMI at https://<vmwarevCenter_fqdn>:5480.
Wait for a few seconds and verify that VMware vSphere is reconnected on all the VMware ESXi nodes.
b. To update the system_dns field in the global configuration, enter:
curl -i -X PUT --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock http://localhost/rest/vxm/internal/configservice/v1/configuration/keys/system_dns -H "Content-Type: application/json" -d '{ "value" : "<new_dns_ips>" }'
c. Click the Configure tab and select Networking > TCP/IP configuration and click Edit to update the DNS settings on
the VMware ESXi nodes.
d. Repeat Step c on all the VMware ESXi nodes on the VMware vSphere.
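For example, assuming the new DNS value is the hypothetical server 20.100.10.7:

curl -i -X PUT --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock http://localhost/rest/vxm/internal/configservice/v1/configuration/keys/system_dns -H "Content-Type: application/json" -d '{ "value" : "20.100.10.7" }'

An HTTP 200 response indicates that the system_dns key was updated.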
7. To change the VxRail Manager hostname, perform the following:
a. From the VxRail Manager console, enter:
/opt/vmware/share/vami/vami_set_hostname <new_vxm_fqdn>
b. To verify the hostname settings, enter:
/opt/vmware/share/vami/vami_fullhostname
8. From the VxRail Manager console, to stop the vmware-marvin, runjars, and vmware-loudmouth services, enter:
service vmware-marvin stop
service runjars stop
service vmware-loudmouth stop
9. To update the vxm_host field in the global configuration, perform the following:
a. From the VxRail Manager console, to update the VxRail Manager IP address, enter:
curl --location --request PUT 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vxm_host' --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock --data-raw '{"value": "<new_vxm_ip>"}'
b. To verify the changes, enter:
curl --location --request GET 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vxm_host' --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock
10. To update the VxRail Manager certificate for the hostname or IP address change, perform the following:
a. Select the Inventory icon and select the VxRail cluster.
b. Click the Configure tab and select System > Certificate.
c. To replace an existing certificate, import a new certificate that complies with the FQDN standard.
You cannot import a self-signed certificate.
d. To modify the ca.cnf file and generate a new certificate, enter:
vi /etc/vmware-marvin/ssl/ca.cnf

[v3_req]
basicConstraints = CA:false
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[req_distinguished_name]
countryName = US
stateOrProvinceName = local
localityName = vsphere
0.organizationName = VMware
organizationalUnitName = VxRailApplianceServer
commonName = c3-vxm-new.rackd17.local <new_vxm_hostname>
[alt_names]
DNS.1 = c3-vxm-new.rackd17.local <new_vxm_hostname>
IP.1 = 20.12.13.211 <new_vxm_ipaddr>
If you must import a self-signed certificate, go to OpenSSL or contact your system administrator.
11. To restart all the VxRail services from the VxRail Manager console, enter:
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/api-gateway --replicas=0
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/api-gateway --replicas=1
service vmware-loudmouth start
service vmware-marvin start
systemctl status vmware-marvin
service runjars start
systemctl status runjars
12. To change the IP address of the VxRail Manager, perform the following:
a. Select the Inventory icon and select the VxRail cluster.
b. Click the Summary tab, scroll down to Custom Attributes, and click Edit...
c. Enter the new IP address that you set earlier in the VxRail-IP attribute value and click OK.
13. To turn on health monitoring, perform the following:
a. Select the Inventory icon and select the VxRail cluster.
b. Click the Configure tab and select VxRail > Health Monitoring.
c. Enable the Health Monitoring Status.
14. To update the subscription callback if the VxRail Manager hostname is changed, perform the following:
a. From the VxRail Manager console, enter:
python /usr/lib/vmware-marvin/marvind/webapps/ROOT/WEB-INF/classes/scripts/update-subscription-callback.py -o <old_vxm_fqdn> -n <new_vxm_fqdn>
For example:

c3-vxm:/home/mystic # python /usr/lib/vmware-marvin/marvind/webapps/ROOT/WEB-INF/classes/scripts/update-subscription-callback.py -o c3-vxm.rackD17.local -n c3-vxm-new.rackD17.local
Number of hosts need to be registered: 3
Old subscription callback has been deleted, sn:7ZVF823, old address: c3-vxm.rackD17.local
New subscription callback has been created, sn:7ZVF823, new address: c3-vxm-new.rackD17.local
Old subscription callback has been deleted, sn:7ZXL823, old address: c3-vxm.rackD17.local
New subscription callback has been created, sn:7ZXL823, new address: c3-vxm-new.rackD17.local
Old subscription callback has been deleted, sn:7ZVL823, old address: c3-vxm.rackD17.local
New subscription callback has been created, sn:7ZVL823, new address: c3-vxm-new.rackD17.local
Finished to update all hosts subscription callback in the VXM.
c3-vxm:/home/mystic #
15. To avoid the VMware vCenter Server upgrade issue, see KB 172315.
Customize the default network IP address for docker
Configure the default network for the RKE2 cluster.
About this task
This procedure applies to the VxRail cluster running the VxRail version 7.0.370 and later or VxRail 8.0.x and later. See the VxRail
7.0 Support Matrix or VxRail 8.0.x Support Matrix for a list of supported versions.
You can customize the dummy0 network interface. By default, VxRail Manager configures the dummy0 interface with the IP address 172.28.177.1/32. If there is a conflict with an IP address on your LAN, specify another IP address for the dummy0 interface.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. Using SSH, log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To change the IP address of the dummy0 interface for VxRail Manager from 172.28.177.1/32, perform the following:
a. To view and edit the configuration of the dummy0 interface, enter:
vi /etc/sysconfig/network/ifcfg-dummy0
b. Update the IPADDR field with a new IP address.
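A sketch of what the relevant line in ifcfg-dummy0 might look like after the change; 172.28.200.1/32 is a hypothetical replacement address, and the surrounding fields depend on your build:

IPADDR='172.28.200.1/32'

Keep the /32 prefix length; only the address itself should change.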
4. To restart the network service, enter:
systemctl restart network
Wait for a few seconds and verify that the network is restarted.
5. To restart the RKE2 cluster and run an RKE2 precheck, enter:
# bash /usr/local/bin/rke2-precheck.sh
Wait for a few seconds for the RKE2 to restart.
6. To change the CIDR of the RKE2 cluster, enter the following command.
By default, the VxRail Manager is configured with CIDRs for RKE2 services and pods with the IP address ranges 172.28.176.0/24 and 172.28.175.0/24. If there is an IP address conflict with your LAN configuration, specify another IP address range for the RKE2 CIDRs.

# bash /usr/local/bin/rke2-reset-cidr.sh -s="<xx.xx.xx.xx/xx>" -c="<xx.xx.xx.xx/xx>"

Where:
-c --cluster-cidr="<xx.xx.xx.xx/xx>"
-s --service-cidr="<xx.xx.xx.xx/xx>"

Wait a few seconds for the RKE2 cluster to restart.
For example:
# bash /usr/local/bin/rke2-reset-cidr.sh -s="10.42.0.1/24" -c="10.43.0.1/24"
NOTE: The cluster-dns address ends in 10 and uses the same prefix as the service-cidr. For example, if the service-cidr is 10.42.0.1/24, the cluster-dns is 10.42.0.10. The netmask value must be equal to or less than 24.
Chapter 4: Manage network settings
Manage the NIOC configuration and change NIC ports.
Use the following links to manage some network settings:
● To change the default VMware VDS NIOC configuration, see VxRail Change Default VDS NIOC configuration.
● To change the physical NIC ports of VM network traffic, see VxRail Change Physical NIC Ports of VM Network Traffic.
● To share network traffic with VMware vSAN, see Configure Bandwidth Allocation for System Traffic.
Configure a VxRail node to support the PCIe adapter port

You can use advanced NIC definition with flexible configurations without an NDC connection. Configure the node for VxRail initialization and node expansion to use the PCIe adapter.
Prerequisites
Before you configure the node:
● Go to the Day 1 public API to verify that the NIC profiles in the API are ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS.
● Verify that the node has enough spare PCIe NICs for configuration.
● Configure the required VLAN on the switch for the PCIe adapter ports that are planned for discovery and management.
● When using a PCIe-only adapter, disable the NDC or OCP ports. To avoid network interruptions, use the DCUI to log in to the iDRAC console and configure the NDC or OCP ports.
About this task
Use the PCIe adapters only if NDC adapters are not used for VxRail management and discovery. Adjust the PCIe adapter
configuration before starting the VxRail initialization.
This procedure applies to VxRail clusters running VxRail 7.0.130 or later and VxRail 8.0.x or later. See the VxRail 7.0 Support Matrix or VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. Log in to the iDRAC console as root.
2. Press Alt-F1 to switch to the CLI mode.
3. To verify the status and identify which VMNICs are from PCIe adapters, enter:
esxcfg-nics -l
Check the PCI column to identify the different PCIe adapters.
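A sample of what the output might look like; adapter names, PCI addresses, drivers, and descriptions are illustrative:

Name    PCI           Driver  Link  Speed      Duplex  MAC Address        MTU   Description
vmnic0  0000:18:00.0  ntg3    Up    1000Mbps   Full    e4:43:4b:aa:bb:01  1500  NDC/OCP port
vmnic2  0000:3b:00.0  ixgben  Up    10000Mbps  Full    e4:43:4b:aa:bb:03  1500  PCIe adapter port

VMNICs that share a PCI bus prefix (for example, 0000:3b) belong to the same adapter.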
4. To view the current NIC teaming policy of a vSwitch, enter:
esxcli network vswitch standard policy failover get -v vSwitch0
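The output resembles the following (the values are illustrative; the Active Adapters and Standby Adapters fields are the ones that the next steps modify):
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic0, vmnic1
Standby Adapters:
Unused Adapters: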
5. Select one of the PCIe ports and add the PCIe VMNIC into the default VMware vSwitch0.
In this example, 2-port NDC and 2-port PCIe adapters are used in the VxRail E560F model. VMNIC2 and VMNIC3 are the ports that are planned for use from the PCIe adapters.
● Identify the PCIe NICs to configure as the active and standby uplinks.
● Identify the NDC or OCP NICs to be removed from the VMware vSwitch.
To configure the VxRail node before deployment, only one port from the PCIe adapter is required.
6. To add one PCIe VMNIC into the port groups, enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a vmnic2
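Optionally, to confirm that vmnic2 is now an active uplink on a port group (a standard esxcli check; the port group name is from the commands above), enter:
esxcli network vswitch standard portgroup policy failover get -p "Management Network"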
7. To add an additional PCIe NIC to the VxRail networking as a standby uplink, enter:
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3
8. After the nodes are configured, ping the VxRail management IP address. Perform one of the following to start the
deployment.
● For the VMware vCenter Server UI, perform the following:
○ In the VDS Settings step, select the Custom VDS configuration.
○ In the Uplink Definition checklist, select two PCIe adapter ports and complete the VxRail deployment.
● If you are using the API to perform the initialization, only ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS NIC profiles are supported.
9. To expand the VxRail cluster host, perform the following:
a. Complete all the procedures on the new node.
b. Perform the node expansion using the VMware vCenter Server UI or API.
10. To expand the VxRail satellite host, perform the following:
a. Ensure that there are two adjacent PCIe adapter ports with the same network speed of 1 Gbps or greater.
b. Remove unused ports from the vSwitch0 and add the PCIe adapter ports. For example, to remove the VMNIC0 and
VMNIC1 from vSwitch0, enter:
esxcli network vswitch standard uplink remove -u vmnic0 -v vSwitch0
esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0
c. Verify that at least one PCIe adapter port is Active and the other is Standby. For example, to add VMNIC2 to vSwitch0 and configure it as an Active PCIe adapter port, enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a vmnic2
For example, to add VMNIC3 to vSwitch0 and configure it as a Standby PCIe adapter port, enter:
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -s vmnic3
d. Use the VMware vCenter Server wizard or API to expand the node.
NOTE: The VxRail physical view page does not display the PCIe adapter information.
See Configure Physical Network Adapters on a VMware VDS for more information.
Configure host affinity for the VxRail Layer 3 management network
Create the host affinity rule before Layer 3 management network segment node expansion. You must configure the host affinity groups to prevent unintended migration of the core system VMs.
About this task
Starting with VxRail version 8.0.x, the VxRail cluster supports node expansion using nodes in different Layer 3 segments. With multiple segments, the VxRail cluster management network may have multiple subnets. For the initial segment, assign IP addresses in that subnet to the core system VMs, including the VMware vCenter Server and VxRail Manager.
CAUTION: If core system VMs are migrated to the other Layer 3 management network segment, you may
experience connection and service loss.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. To set the core system host affinity group, perform the following:
a. Log in to the VMware vCenter Server Web Client as an administrator.
b. Select the VxRail cluster.
c. From the Configure tab, select Configuration > VM/Host Groups.
d. Click Add to create a VM group that includes all the VxRail service VMs and VxRail Manager.
For VMware vRealize Log Insight and remote support connectivity, add these VMs into the same group and click OK.
e. Create a host group that includes all the first-segment nodes.
f. Select VM/Host Rules and add a VM or host rule.
g. From the Type drop-down menu, select Virtual Machines to Hosts.
h. Select the related VM Group and Host Group, and click OK.
The VM or host rule is enabled. If a new core system VM is deployed or a node is added in the first segment, add the new VM or host to the groups.
2. Validate the host affinity rule. If you try to migrate the core VM to another Layer 3 segment, the validation stops with a
violation alert.
See VM-Host Affinity Rules for more information.
Configure jumbo frames
VxRail supports jumbo frames on VMware vSAN, management, VMware vSphere vMotion, iSCSI, and NFS traffic types.
Prerequisites
● Verify that the VxRail cluster is healthy and all nodes are running.
● On the Windows client, install the following:
○ PowerShell 5.1.14409.1005
○ Posh-SSH 2.0.2 for PowerShell
○ VMware.PowerCLI 12.2.0 build 17538434 for PowerShell
● Download the enablejumboframe_movevc_70100.ps1 script.
● When you enable the jumbo frames on the VMware VDS, uplinks are cycled up and down for approximately 20-40 seconds.
For critical applications, shut down and power on all the user VMs.
● The scripts power off and power on the VxRail Manager and the user VMs. If some VM services prevent the VM from
shutting down, manually shut down the VM. If the script fails after you power off the VMs, power on the VMs and retry.
● Do not power off the VxRail-managed VMware vCenter Server.
● If connectivity to the VMware vCenter Server fails due to a certificate error, enter:
C:\Users\stshell\Downloads>Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

Perform operation?
Performing operation 'Update PowerCLI configuration.'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): Y

Scope     ProxyPolicy     DefaultVIServerMode  InvalidCertificateAction  DisplayDeprecationWarnings  WebOperationTimeoutSeconds
-----     -----------     -------------------  ------------------------  --------------------------  --------------------------
Session   UseSystemProxy  Multiple             Ignore                    True                        300
User                                           Ignore
AllUsers
● Set the security protocol to Tls12 by entering:
C:\Users\stshell\Downloads>[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
● On the physical switch, set the MTU value to 9216 for any switch ports in the VxRail network.
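For example, on a Dell switch running OS10, a minimal sketch for a single port (the interface name is hypothetical; repeat for every switch port in the VxRail network):
! hypothetical interface; adapt to your cabling
interface ethernet 1/1/9
mtu 9216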
About this task
The MTU setting on the physical switch must be larger than the MTU on the virtual switch to accommodate the packet header and footer overhead. The maximum MTU value depends on the physical switch limitation. VMware ESXi supports an MTU size of up to 9000 bytes. A jumbo frame MTU can be any value greater than 1500.
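To validate jumbo frames manually from a host (the same vmkping mechanism the script uses; the interface and target IP are examples from this guide), send an 8972-byte payload with the don't-fragment flag set, which totals 9000 bytes after the 28 bytes of IP and ICMP headers:
vmkping -I vmk2 -d -s 8972 192.168.101.211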
● To enable jumbo frames on the VMware VDS, see Jumbo Frames.
● To disable the network rollback, see vSphere Networking Rollback - Disable Network Rollback.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later. See the VxRail 8.0.x Support Matrix for a
list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. To enable jumbo frames for the VMware vCenter Server, perform the following:
a. Enter enablejumboframe_movevc_70100.ps1 with the following parameters:

Parameter       Value and description
vCenterServer   <vcenter_ipaddress>
vcUser          <vcenter_username>
vcPwd           <vcenter_password>
vxVDS           <vds_name>
vxCluster       <cluster_name>
MTU             <size> Optional: Enter the MTU size. The MTU value range is 1280–9000 bytes.
validIP         <ip_address> The IP address that vmkping uses for the jumbo frame validation.
skipValid       If skipValid is selected, validIP is ignored.
vcNotInCluster  If used, the VMware vCenter Server is not a VM in the selected cluster.
retryTimes      <retry_times> The number of times to retry failed steps in the script. The minimum value is 3.
VMK             <vmk_interface> The source VMkernel interface that vmkping uses to test the jumbo frames. The default value is vmk2.
vxmIP           <vxrail_mgr_ipaddr> The VxRail Manager VM skips the power off. This field is required when your cluster uses internal DNS.
Examples:
● Internal VMware vCenter Server with external DNS (VMware vCenter Server is a VM in the VxRail cluster):
.\enablejumboframe_movevc_70100.ps1 -MTU 9000 -vCenterServer 192.168.101.201 -vcUser "administrator@vsphere.local" -vcPwd "Testvxrail123!" -vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -validIP 192.168.101.211 -retryTimes 5
● Internal VMware vCenter Server with internal DNS (VMware vCenter Server is a VM in the VxRail cluster):
.\enablejumboframe_movevc_70100.ps1 -MTU 9000 -vCenterServer 192.168.101.201 -vcUser "administrator@vsphere.local" -vcPwd "Testvxrail123!" -vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vxmIP 192.168.101.200 -validIP 192.168.101.211 -retryTimes 5
● External VMware vCenter Server with external DNS (VMware vCenter Server is not in the VxRail cluster):
.\enablejumboframe_movevc_70100.ps1 -skipValid -MTU 9000 -vCenterServer 192.168.101.201 -vcUser "administrator@vsphere.local" -vcPwd "Testvxrail123!" -vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vcNotInCluster
2. To enable the jumbo frame when adding or replacing a node, perform the following:
a. When you add a node to a cluster with jumbo frames enabled, select Put Hosts in Maintenance Mode.
b. Run enablejumboframe_movevc_70100.ps1 with the following parameters:

Parameter       Value and description
vCenterServer   <vcenter_ipaddress>
vcUser          <vcenter_username>
vcPwd           <vcenter_password>
vxVDS           <vds_name>
vxCluster       <cluster_name>
hostMode        <host_mode>
addHostName     <name>
MTU             <MTU_size> Optional: The MTU value range is 1280–9000 bytes.
validIP         <ip_address> The IP address that vmkping uses for the jumbo frame validation.
skipValid       If skipValid is selected, validIP is ignored.
vcNotInCluster  If used, the VMware vCenter Server is not a VM in the selected cluster.
retryTimes      <retry_times> The number of times to retry failed steps in the script. The minimum value is 3.
VMK             <vmk_interface> The source VMkernel interface that vmkping uses to test the jumbo frames. The default value is vmk2.
vxmIP           <vxrail_mgr_ipaddr> The VxRail Manager VM skips the power off. This field is required when your cluster uses internal DNS.
Example:
.\enablejumboframe_movevc_70100.ps1 -skipValid -MTU 9000 -vCenterServer 192.168.101.201 -vcUser "administrator@vsphere.local" -vcPwd "Testvxrail123!" -vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vcNotInCluster -hostMode -addHostName "engdell1-01.localdomain.local"
c. After the node is added, exit Maintenance Mode.
If you configure the cluster with the internal DNS, the VMware vCenter Server temporarily loses connectivity to the hosts
after restarting the VxRail Manager.
Power on the VxRail Manager VM if it is not powered on automatically after the procedure.
Configure the proxy node for new segment node expansion
Configure the proxy node for new segment node expansion. Starting with VxRail version 8.0, VxRail supports node expansion using nodes in Layer 2 or Layer 3 segments.
Prerequisites
1. The VxRail version 8.0 cluster is initialized in the first segment. Select the new node from the expansion track that is not in the first segment, and manually configure the management network on that node, which acts as the proxy node. Routing must be in place between the subnet that is assigned to the first segment and the new segment.
● IP address: Configure the IP address that is in the management IP address pool for the new segment. You must use the
same IP address that is planned to set as the management IP address.
● Network gateway: Configure the management network gateway address for the new segment.
● VLAN ID: Configure the VLAN ID that is assigned to the management traffic.
2. Using the VMware ESXi CLI or SSH, configure the IP address on VMK2.
About this task
For a new segment node expansion, VxRail uses a proxy node configuration. You must select one of the new nodes in the new segment and set a planned management IP address as the proxy node management IP address. The VxRail Manager, which has an IP address in the first segment, must be able to connect to the proxy node in the new segment.
The VMware vCenter Server or VxRail Manager can then access the proxy node to continue the node expansion. Each new segment requires one proxy node.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. To set the proxy node IP address, perform the following:
a. Log in to the iDRAC console.
b. Press F2 and log in to the DCUI as root.
c. Go to Troubleshooting Options.
d. Select Enable ESXi Shell.
e. From the virtual keyboard, press Alt-F1 to switch to the console shell.
f. Log in to the VMware ESXi console as root.
The default password is Passw0rd!.
g. To apply the IP address on the VMK2 for the proxy node, enter:
esxcli network ip interface ipv4 set -i vmk2 -I <management_ip_address> -N <subnet_mask> -t static -g <gateway>
esxcli network ip route ipv4 add -g <gateway> -n 0.0.0.0
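For example, assuming a hypothetical management IP address of 192.168.105.50/24 and a gateway of 192.168.105.1 for the new segment:
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.105.50 -N 255.255.255.0 -t static -g 192.168.105.1
esxcli network ip route ipv4 add -g 192.168.105.1 -n 0.0.0.0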
By default, the management traffic is untagged and the VLAN ID is 0.
h. If you are using a tagged VLAN for the new segment management network on the switch, you must set the VLAN on the node. If the VLAN is untagged, skip this step. To configure the VLAN, enter:
esxcli network vswitch standard portgroup set -p "Management Network" -v <vlan_id>
esxcli network vswitch standard portgroup set -p "VM Network" -v <vlan_id>
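For example, assuming a hypothetical management VLAN ID of 202 (the VLAN ID that also appears in the switch samples later in this guide):
esxcli network vswitch standard portgroup set -p "Management Network" -v 202
esxcli network vswitch standard portgroup set -p "VM Network" -v 202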
i. Press Alt-F2 to return to the DCUI menu and press ESC twice.
2. To validate the proxy node IP connection, ping the proxy node IP address from the VMware vCenter Server or VxRail
Manager and verify the network connection.
Convert VxRail-managed VMware VDS to customer managed on a customer-managed VMware vCenter Server
Prerequisites
Obtain access to the customer-managed VMware vCenter Server and VxRail Manager.
Before you begin the conversion, take a snapshot of all the service VMs:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the Inventory icon.
3. Right-click VxRail Manager and select Snapshots > Take Snapshot.
4. Enter a name and click OK.
5. Repeat these steps for the remaining service VMs.
About this task
This procedure applies to the VxRail cluster running VxRail version 7.0.450, or 8.0.x or later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail cluster.
See the Dell VxRail 7.0 Support Matrix or VxRail 8.0.x Support Matrix for a list of supported versions.
CAUTION: Do not perform this task in a VCF environment. This procedure is not supported in a VCF environment.
Steps
1. Using SSH, log in to VxRail Manager as mystic.
2. To connect to the database, enter:
psql -U postgres vxrail
3. To view the VMware VDS status in the database, enter:
select * from configuration.configuration where key='customer_supplied_vds';

 id | category |          key           | value
----+----------+------------------------+-------
 84 | setting  | customer_supplied_vds  | false
(1 row)
Optional: If the above query returns no rows for customer_supplied_vds, add a row by entering:
INSERT INTO configuration.configuration (category,key,value) VALUES ('setting','customer_supplied_vds','true');
4. To convert to a customer-managed VMware VDS, set the value to true in the database by entering:
update configuration.configuration set value='true' where key='customer_supplied_vds';
5. To confirm the status of the VMware VDS, enter:
select * from configuration.configuration where key='customer_supplied_vds';

 id | category |          key           | value
----+----------+------------------------+-------
 84 | setting  | customer_supplied_vds  | true
(1 row)
6. To exit the database, enter: \q
7. To migrate the VMware VDS to two VMware VDS, see Convert one VMware VDS to two VMware VDS.
NOTE: This is an optional step.
Enable a VxRail node to support the PCIe adapter port without an NDC connection
VxRail 7.0.130 and later supports an advanced NIC definition to use NICs with flexible configurations. This section describes how to configure the node for VxRail initialization and node expansion to use the PCIe adapter, and the steps to modify the PCIe adapter configuration.
Prerequisites
● Standard cluster deployment running VxRail version 7.0.130 or later.
● NIC profiles in the API: ADVANCED_VXRAIL_SUPPLIED_VDS and ADVANCED_CUSTOMER_SUPPLIED_VDS.
● The new node must have enough spare PCIe NICs for the configuration.
● You must configure the required VLAN on the switch for the PCIe adapter ports that are planned for discovery and management.
● When using a PCIe-only adapter, the NDC ports must not be in a connected or active state. To avoid network interruption, configure the NDC ports using the DCUI through the iDRAC console.
About this task
Use PCIe adapters only if no NDC adapters are used for VxRail management and discovery. You must adjust the PCIe adapter configuration before starting the VxRail initialization procedure.
This procedure is intended for customers, Dell service providers who are authorized to work on VxRail clusters, and VxRail administrators. It applies to a VxRail 7.0.130 or later cluster managed by either a VxRail-managed or a customer-managed VMware vCenter Server.
Steps
1. Log in to the node iDRAC interface and open the console.
2. Press Alt+F1 to switch to CLI mode.
3. Log in to the CLI as root.
4. To check the VMNIC status and locate the VMNICs from the PCIe adapters, enter: esxcfg-nics -l
Check the PCI column to identify the different PCIe adapters in the result.
5. Select one of the PCIe ports and add it into vSwitch0 as shown in the next step.
6. To configure and add the PCIe VMNIC into the default vSwitch0, perform the following:
In the following example, 2-port NDC and 2-port PCIe adapters are used on a VxRail E560F. vmnic2 and vmnic3 are the ports that are planned for use from the PCIe adapters. Only one port from the PCIe adapter must be configured before the VxRail deployment.
a. Enter the following commands:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a vmnic2
b. Repeat the above step on all nodes planned for deployment.
After all nodes are configured, ping the VxRail management IP address and go to the UI or API to start deployment.
7. After the VxRail initialization is complete, perform the following:
a. In the VMware VDS configuration setting, select Custom.
b. In the uplink definition checklist, select the proper PCIe adapter port and complete the VxRail Deployment Wizard.
If you are using the API to perform the initialization, only the ADVANCED_VXRAIL_SUPPLIED_VDS and ADVANCED_CUSTOMER_SUPPLIED_VDS nic_profiles are supported.
8. For VxRail node expansion, perform the following:
a. Expand the cluster host and complete the entire procedure for the new node.
b. Perform the node expansion using the wizard or API.
c. To expand the satellite node, ensure that there are at least two adjacent PCIe ports with the same network speed of 1 Gb/s or greater.
d. Remove unused ports from vSwitch0 and add the PCIe ports. At least one adapter is active, and one adapter is standby. For example, to remove vmnic0 and vmnic1 from vSwitch0, enter the following:
esxcli network vswitch standard uplink remove -u vmnic0 -v vSwitch0
esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0
e. To add vmnic2 to vSwitch0 and configure it as an active adapter, enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a vmnic2
f. To add vmnic3 to vSwitch0 and configure it as a standby adapter, enter:
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -s vmnic3
9. Perform the node expansion using the UI wizard or API.
NOTE: Known issue: The VxRail Physical View page does not display the PCIe adapter information. See Configure Physical Network Adapters on a vSphere Distributed Switch for more information.
Enable dynamic LAG for two ports on a VxRail network
Enable dynamic LAG on a VxRail network running VxRail 7.0.450 or 8.0.x or later versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Verify the VxRail version on the cluster
The VxRail cluster must be running VxRail 7.0.450 or 8.0.x or later versions to enable LAG.
About this task
Verify that the firmware on the Dell switch is later than 10.5.3.0 and set each port with LACP individual function. For a non-Dell
switch, check each port with LACP individual function.
Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster, and click the Configure tab.
4. Expand VxRail and click System.
Verify the health state of the VxRail cluster
Verify that the VxRail cluster is healthy.
About this task
CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Expand VxRail, and click Physical View.
4. Verify that the VxRail cluster status is Healthy.
Verify the VMware VDS health status
The VMware VDS must be in a healthy state.
About this task
CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.
Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS switch is in a healthy state.
8. Disable the VMware VDS health service.
The health check can report incorrect network conditions after LAG is enabled.
Verify the VMware VDS uplinks
Verify that the minimum number of uplinks are assigned to support VxRail.
About this task
This procedure applies to transferring all port groups to LAG traffic. If you have multiple ports or NICs, you can reallocate some port groups to LAG traffic; the other port groups remain on uplinks.
The following minimum uplinks are required for a VxRail cluster configuration:
● For one VMware VDS, two uplinks are required.
● For two VMware VDS, two uplinks per VMware VDS are required.
CAUTION: Do not proceed with this task unless the required minimum uplinks are assigned to support the VxRail
network.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that supports the VxRail cluster network.
3. Select Settings > Edit Settings.
4. On the Edit Settings window, select the Uplinks tab.
Confirm isolation of the VxRail port group
Confirm the VxRail port groups targeted for LAG.
About this task
LAG is supported on all networks. You can apply LAG on the VMware vSAN network, the VMware vSphere vMotion network, or both.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS switch that supports the VxRail cluster network, and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the selected port group, and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.
6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
Identify the NICs for LAG
Identify the NICs that are targeted for LAG.
About this task
If you have already identified the switch ports that support LAG, you can skip this task.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.
Identify assignment of the NICs to node ports
Identify assignment of the NICs to node ports.
Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description to identify each VMNIC, enter:
esxcli network nic list
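The output resembles the following (truncated and illustrative; the driver and PCI device values are hypothetical, and the MAC address reuses a value from the LLDP samples in this guide):
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
vmnic1  0000:19:00.1  ntg3    Up            Up           10000  Full    b8:59:9f:58:49:7d  1500  ...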
Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.
Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.
Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC address under Switch
Connection ID for each view differs, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.
Prepare the switches for multichassis LAG
To enable multichassis link aggregation across a pair of switches, configure VLT between the switches. VLT supports the
aggregation of the ports terminating on separate switches.
Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.
About this task
For Dell operating system 10, VLT configures a logical connection to enable LAG across a pair of switches. The command syntax
that is shown in this task is based on Dell operating system 10. The command differs from model to model and vendor to vendor.
See your switch vendor documentation or contact your technical support team for more information. For the Dell switch,
confirm that the firmware is greater than 10.5.3.0 and set each port with LACP individual function. For a non-Dell switch, check
each port with LACP individual function.
Steps
1. Connect the Ethernet cables between one or two pairs of ports on each switch.
2. For a multichassis LAG, configure a VLT trunk between the switches.
3. To view the configuration on each switch, enter:
show running-configuration vlt
!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30
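A minimal Dell OS10 sketch that would produce the VLT configuration shown above (the domain ID, backup destination, and discovery interfaces are taken from the sample output; adapt them to your environment):
! values mirror the sample running configuration above
configure terminal
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet1/1/29-1/1/30
peer-routing
end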
4. Configure a port channel to support LAG with the node ports.
● A port channel is configured for each node in the VxRail cluster.
● For a multichassis link aggregation, port channels are configured on both switches.
● For a multichassis link aggregation, the port channel ID values must match on both switches.
● Define the VLAN or VLANs for the VxRail networks that are targeted for link aggregation.
● For each port channel, LACP individual function is enabled.
To view the configuration on a port channel, enter: show running-configuration interface port-channel 100
interface port-channel100
description "Node2 VPC"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 100
lacp individual
5. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network loss during STP convergence. The command to set STP to portfast depends on the switch model and vendor; contact your physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)
Configure the first switch for LAG
The switch port that connects to the VMNIC that is moved to the LACP policy is added to the port channel. In this example, move VMNIC1 to the LAG and then move the LAG into the port channel for each node.
Steps
1. Open a console to the ToR switches.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>

ethernet1/1/3   crkm01esx03.crk.v...  b8:59:9f:58:44:a5  vmnic1
ethernet1/1/6   crkm01esx04.crk.v...  b8:59:9f:58:45:55  vmnic1
ethernet1/1/9   crkm01esx01.crk.v...  b8:59:9f:58:49:7d  vmnic1
ethernet1/1/12  crkm01esx02.crk.v...  b8:59:9f:58:49:dd  vmnic1
3. To configure the switch interface and set the channel group to Active, enter:
interface ethernet 1/1/9
channel-group 101 mode active
4. Repeat these steps for each switch interface that is configured into the LACP policy.
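For example, based on the LLDP output above and the port-channel IDs shown in the port-channel summary later in this guide, the remaining interfaces would be configured as follows:
interface ethernet 1/1/12
channel-group 102 mode active
interface ethernet 1/1/3
channel-group 103 mode active
interface ethernet 1/1/6
channel-group 104 mode active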
Configure the second ToR switch for LAG
After you move a VMNIC to the LAG on the VMware VDS, the switch interface that is connected to the VMNIC is added to the port channel. Move the second VMNIC into the port channel for each node. Migrate the second switch interface that supports the VMware vSAN or VMware vSphere vMotion traffic to a port channel.
Steps
1. Open a console session to the second ToR switch.
2. To confirm the VMNIC connections using LLDP, enter:
show lldp neighbors | grep <vmnic>

26-II-TOR-A# show lldp neighbors | grep crkm01 | grep vmnic2
ethernet1/1/1   crkm01esx03.crk.v...  04:3f:72:c3:77:78  vmnic2
ethernet1/1/5   crkm01esx04.crk.v...  04:3f:72:c3:77:7c  vmnic2
ethernet1/1/7   crkm01esx01.crk.v...  04:3f:72:c3:77:28  vmnic2
ethernet1/1/10  crkm01esx02.crk.v...  04:3f:72:c2:09:2c  vmnic2
3. To configure the switch interface, enter:
26-II-TOR-A(config)# interface ethernet 1/1/7
4. To set the channel group to active, enter:
26-II-TOR-A(conf-if-eth1/1/7)# channel-group 101 mode active
5. For the remaining interfaces, set the channel group to active.
Identify the load-balancing policy on the switches
The command syntax that is shown in this task is based on Dell Operating System 10. The command differs from model to model
and vendor to vendor. See your switch vendor documentation or contact your technical support team for more information.
Steps
1. To view the load-balancing policies set on the switch, enter:
show load-balance

Load-Balancing Configuration For LAG and ECMP:
----------------------------------------------
IPV4 Load Balancing          : Enabled
IPV6 Load Balancing          : Enabled
MAC Load Balancing           : Enabled
TCP-UDP Load Balancing       : Enabled
Ingress Port Load Balancing  : Disabled
IPV4 FIELDS                  : source-ip destination-ip protocol vlan-id l4-destination-port l4-source-port
IPV6 FIELDS                  : source-ip destination-ip protocol vlan-id l4-destination-port l4-source-port
MAC FIELDS                   : source-mac destination-mac ethertype vlan-id
TCP-UDP FIELDS               : l4-destination-port l4-source-port
2. Verify that the load-balancing policy on the switches aligns with the load-balancing policy that is to be configured on the VxRail network.
Configure the LACP policy on the VxRail VDS
Configure the LACP policy on the VxRail VDS.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS on which you want to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.
Verify the port flags
Verify that the port flag is set to individual on each switch.
Steps
1. To check the flag setting on the switch, enter:
show port-channel summary
2. Verify that (IND) is displayed next to each of the two ports.
Migrate the uplink to a LAG port
Assign one of the standby VMNICs to the LAG ports. Verify that the LAG ports peer with the switch ports.
Steps
1. Right-click the VMware VDS that is targeted for LAG, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, select all the member hosts and click NEXT.
4. On the Manage physical adapters page, select one VMNIC to assign to the LAG on each host and click NEXT.
5. Skip the Manage VMkernel adapters and Migrate VM networking pages.
6. On the Ready to complete page, review the uplink reassignment and click FINISH.
Migrate the LACP policy to the standby uplink
Migrate the LACP policy to the standby uplink on the target port group.
Steps
1. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink.
2. Select Distributed Port Group > Manage Distributed Port Groups.
3. On the Select port group policies page, select Teaming and failover, and then click Next.
4. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
5. On the Teaming and failover page, under Failover order section, use the UP and DOWN arrows to migrate between the
uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
c. Repeat steps a and b for all port groups.
6. On the Ready to complete page, review the changes, and click FINISH.
7. A warning message is displayed while the physical adapters are migrated. Click OK to dismiss the warnings and proceed, or click Cancel to review your changes.
8. Verify that one of the ports is connected to the LAG. Yellow connections in the example indicate that connections are applied to all port groups.
9. To view the status of the switch, enter:
show port-channel summary
10. Verify that (IND) and (P) are displayed next to each of the ports.
Move the second VMNIC to LAG
Migrate the second VMNIC that supports all the port groups to LAG.
Steps
1. Right-click the VMware VDS, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, under Member hosts, select all the hosts in the VxRail cluster and click NEXT.
4. On the Manage physical adapters page, perform the following:
● For the other uplinks transferred to the LAG, select the VMNIC that is associated with the uplink and select lag1-1 in the topology that carries all the port group traffic.
● Replace vmnic2, which still uses the original uplink, with the unassigned LAG port (for example, lag1-0).
5. Skip the remaining screens and click Finish.
6. To verify the switch status, enter: show port-channel summary
7. Verify that all connections are migrated to LAG.
NOTE: vmnic1 and vmnic5 support the network that is targeted for link aggregation. They were unassigned from uplink2
and uplink4 and reassigned to the two ports attached to the LACP policy.
8. Skip the rest of the screens and click FINISH.
Verify LAG connectivity on VxRail nodes
Verify the LACP connectivity on the VMware VDS.
Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get
DVSwitch           LAGID       NIC     Rx Errors  Rx LACPDUs  Tx Errors  Tx LACPDUs
-----------------  ----------  ------  ---------  ----------  ---------  ----------
crk-m01-c01-vds01  3247427758  vmnic1  0          21          0          89
3. Repeat this procedure on the other VxRail nodes to validate the LACP status.
Verify that LAG is configured in the VMware VDS
Verify that LAG is active on the VMware VDS port groups.
Prerequisites
Configure the LAG for the VMNIC on all the VxRail nodes.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Select the Configure tab and click Topology.
3. Select the LAG and verify that the specified VMNIC is assigned to the uplink against the LAG.
Enable dynamic LAG for four ports on a VxRail network
Enable dynamic LAG on a VxRail network running VxRail 7.0.450 or 8.0.x or later versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Verify the VxRail version on the VxRail cluster
About this task
The VxRail cluster must be running VxRail 7.0.450 or 8.0.x or later versions. You can enable LAG with two, four, six, or eight
ports that support VxRail networking.
Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster and click the Configure tab.
4. Expand VxRail, and click System.
Verify the health state of the VxRail cluster
Verify that the VxRail cluster is healthy.
About this task
CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Expand VxRail, and click Physical View.
4. Verify that the VxRail cluster status is Healthy.
Verify the VMware VDS health status
The VMware VDS must be in a healthy state.
About this task
CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.
Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS switch is in a healthy state.
8. Disable the VMware VDS health service.
The health check can report incorrect network conditions after LAG is enabled.
Verify the VMware VDS uplinks
Verify that the minimum number of uplinks are assigned to support VxRail.
About this task
The following minimum uplinks are required for a VxRail cluster configuration:
● For one VMware VDS, two uplinks are required.
● For two VMware VDS, two uplinks per VMware VDS are required.
CAUTION: Do not proceed with this task unless the required minimum uplinks are assigned to support the VxRail
network.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that supports the VxRail cluster network that is targeted for LAG.
3. Select Settings > Edit Settings.
4. Select Uplinks.
5. Verify that the number of uplinks that are assigned to the VMware VDS support LAG.
Confirm isolation of the VxRail port group
Confirm the VxRail port groups targeted for LAG.
About this task
LAG is supported on all networks. You can apply LAG on the VMware vSAN network, the VMware vSphere vMotion network, or both.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS switch that supports the VxRail cluster network, and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the selected port group, and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.
6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
Identify the NICs for LAG
Identify the NICs that are targeted for LAG.
About this task
If you have already identified the switch ports that support LAG, you can skip this task.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.
Identify assignment of the NICs to node ports
Identify assignment of the NICs to node ports.
Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description to identify each VMNIC, enter:
esxcli network nic list
Identify the switch ports that are targeted for LAG using LLDP
The ToR switches must support LLDP discovery to identify the switch ports. Do not perform this task if the switch does not
support LLDP discovery.
About this task
Skip this task and go to Identify the switch ports that are targeted for LAG using iDRAC if you do not have console access to
the ToR switches or if the ToR switches do not support LLDP discovery.
The command syntax in this task is based on Dell OS10. The command differs from model to model and vendor to vendor.
Contact your technical support team or see your switch vendor documentation.
Steps
1. Open a console session to the ToR switches that support the VxRail cluster.
2. To identify the VMNICs that are connected for each node, enter:
show lldp neighbors | grep <hostname>

ethernet1/1/1  mrm-wd-n4.mrmvxra...  e4:43:4b:5e:01:e0  vmnic0
ethernet1/1/2  mrm-wd-n4.mrmvxra...  f4:e9:d4:09:7d:5f  vmnic5

ethernet1/1/1  mrm-wd-n4.mrmvxra...  f4:e9:d4:09:7d:5e  vmnic4
ethernet1/1/2  mrm-wd-n4.mrmvxra...  e4:43:4b:5e:01:e1  vmnic1
● In this example, VMNIC0 and VMNIC4 are assigned to the VxRail network that is not targeted for LAG. VMNIC1 and VMNIC5 are assigned to the VxRail network that is targeted for LAG.
● VMNIC1 and VMNIC5 are connected to separate switches.
● The MAC address for each pairing is different. This indicates that the source adapter for one NIC port is on the NDC and the other NIC port is on a PCIe adapter card.
3. Use the VMNIC values captured from the switch topology view in the vSphere Client to identify the switch ports that are planned for link aggregation.
4. Repeat the query for each VMware ESXi hostname to discover the NICs.
Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.
Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.
Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC address under Switch Connection ID for each view differs, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.
Prepare the switches for multichassis LAG
To enable multichassis link aggregation across a pair of switches, configure VLT between the switches. VLT supports the
aggregation of the ports terminating on separate switches.
Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.
About this task
For Dell operating system 10, VLT configures a logical connection to enable LAG across a pair of switches. The command syntax
that is shown in this task is based on Dell operating system 10. The command differs from model to model and vendor to vendor.
See your switch vendor documentation or contact your technical support team for more information. For the Dell switch,
confirm that the firmware is greater than 10.5.3.0 and set each port with LACP individual function. For a non-Dell switch, check
each port with LACP individual function.
Steps
1. Connect the Ethernet cables between one or two pairs of ports on each switch.
2. For a multichassis LAG, configure a VLT trunk between the switches.
3. To view the configuration on each switch, enter:
show running-configuration vlt
!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30
4. Configure a port channel to support LAG with the node ports.
● A port channel is configured for each node in the VxRail cluster.
● For a multichassis link aggregation, port channels are configured on both switches.
● For a multichassis link aggregation, the port channel ID values must match on both switches.
● Define the VLAN or VLANs for the VxRail networks that are targeted for link aggregation.
● For each port channel, LACP individual function is enabled.
To view the configuration on a port channel, enter: show running-configuration interface port-channel 100
interface port-channel100
description "Node2 VPC"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 100
lacp individual
5. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network loss during STP convergence. The command to set STP to portfast depends on the switch model and vendor; contact your physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)
Identify the load-balancing policy on the switches
The command syntax that is shown in this task is based on Dell Operating System 10. The command differs from model to model
and vendor to vendor. See your switch vendor documentation or contact your technical support team for more information.
Steps
1. To view the load-balancing policies set on the switch, enter:
show load-balance

Load-Balancing Configuration For LAG and ECMP:
----------------------------------------------
IPV4 Load Balancing          : Enabled
IPV6 Load Balancing          : Enabled
MAC Load Balancing           : Enabled
TCP-UDP Load Balancing       : Enabled
Ingress Port Load Balancing  : Disabled
IPV4 FIELDS                  : source-ip destination-ip protocol vlan-id l4-destination-port l4-source-port
IPV6 FIELDS                  : source-ip destination-ip protocol vlan-id l4-destination-port l4-source-port
MAC FIELDS                   : source-mac destination-mac ethertype vlan-id
TCP-UDP FIELDS               : l4-destination-port l4-source-port
2. Verify that the load-balancing policy on the switches aligns with the load-balancing policy that is to be configured on the VxRail network.
Configure the LACP policy on the VxRail VDS
Configure the LACP policy on the VxRail VDS.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS on which you want to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.
Migrate the LACP policy to standby uplink
Migrate the LACP policy to the standby uplink on the target port group.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and failover, and then click Next.
5. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
6. On the Teaming and failover page, under Failover order section, use the UP and DOWN arrows to migrate between the
uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
c. Repeat steps a and b for all port groups.
7. On the Ready to complete page, review the changes, and click FINISH.
8. A warning message is displayed while the physical adapters are migrated. Click OK to dismiss the warnings and proceed, or click Cancel to review your changes.
9. Verify that one of the ports is connected to the LAG. Yellow connections in the example indicate that connections are applied to all port groups.
10. To view the status of the switch, enter:
show port-channel summary
11. Verify that (IND) and (P) are displayed next to each of the ports.
Migrate an unused uplink to a LAG port
You can temporarily assign the VMNICs to LAG ports. The LAG ports must peer with the switch ports to complete the LAG
process.
Steps
1. Right-click the VMware VDS that is targeted for LAG, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, select all the member hosts and click NEXT.
4. On the Manage physical adapters page, select one VMNIC to assign an uplink on each host.
5. Repeat the process of assigning uplinks to all the hosts, and click Next.
6. Review the uplink reassignment.
In the above example, vmnic1 and vmnic5, which support the network that is targeted for link aggregation, were
unassigned from uplink2 and uplink4 and reassigned to the two ports that are attached to the LACP policy.
7. Skip the rest of the screens and click FINISH.
Configure the first switch for LAG
The switch port that connects to the VMNIC that is moved to the LACP policy is added to the port channel. In this example, move VMNIC1 to the LAG and then move the LAG into the port channel for each node.
Steps
1. Open a console to the ToR switches.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>

ethernet1/1/3   crkm01esx03.crk.v...  b8:59:9f:58:44:a5  vmnic1
ethernet1/1/6   crkm01esx04.crk.v...  b8:59:9f:58:45:55  vmnic1
ethernet1/1/9   crkm01esx01.crk.v...  b8:59:9f:58:49:7d  vmnic1
ethernet1/1/12  crkm01esx02.crk.v...  b8:59:9f:58:49:dd  vmnic1
3. To configure the switch interface and set the channel group to Active, enter:
interface ethernet 1/1/9
channel-group 101 mode active
4. Repeat these steps for each switch interface that is configured into the LACP policy.
Verify LAG connectivity on the switch
Verify the port channel and LACP counters on ToR switches.
Steps
1. To verify the port channels of the switch, enter:
show port-channel summary

Flags:  D - Down  I - member up but inactive  P - member up and active
        U - Up (port-channel)  F - Fallback Activated
-----------------------------------------------------------------------------
Group  Port-Channel         Type  Protocol  Member Ports
-----------------------------------------------------------------------------
101    port-channel101 (U)  Eth   DYNAMIC   1/1/9 (P)
102    port-channel102 (U)  Eth   DYNAMIC   1/1/12 (P)
103    port-channel103 (U)  Eth   DYNAMIC   1/1/3 (P)
104    port-channel104 (U)  Eth   DYNAMIC   1/1/6 (P)
2. To view the LACP counters on the switches for errors, enter:
show lacp counter

                 Marker       Marker Response  LACPDUs      LACPDUs
Port             Sent  Recv   Sent  Recv       Sent  Recvs  Err Pkts
--------------------------------------------------------------------------
ethernet1/1/9    0     0      0     0          18    15     0
ethernet1/1/12   0     0      0     0          17    14     0
ethernet1/1/3    0     0      0     0          16    13     0
ethernet1/1/6    0     0      0     0          15    10     0
3. For a multi-chassis LAG, to verify the port channel status for both the VLT peers, enter:
show vlt <id> vlt-port-detail
Verify LAG connectivity on VxRail nodes
Verify the LACP connectivity on the VMware VDS.
Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get
DVSwitch             LAGID        NIC      Rx Errors   Rx LACPDUs   Tx Errors   Tx LACPDUs
-------------------  -----------  -------  ----------  -----------  ----------  ----------
crk-m01-c01-vds01    3247427758   vmnic1   0           21           0           89
3. Repeat this procedure on the other VxRail nodes to validate the LACP status.
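Optionally, you can also confirm the LACP mode and load-balancing settings that the VMware VDS pushed to the host from the same console. This is a generic esxcli query rather than a VxRail-specific step:
esxcli network vswitch dvs vmware lacp config get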
Move VMware vSAN or VMware vSphere vMotion traffic to LAG
Once the LAG is enabled with a single connected interface, you can migrate the VMware vSAN or VMware vSphere vMotion
traffic to the LAG.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that is targeted for LAG.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and Failover, and then click Next.
5. On the Select port groups page, select the VMware vSAN or VMware vSphere vMotion distributed port groups and click
Next.
6. On the Teaming and failover page, click MOVE UP and MOVE DOWN to move the LACP policy to Active uplinks and all
the other uplinks to Unused uplinks, and then click Next.
7. On the Ready to complete page, review the changes, and click FINISH.
Verify that LAG is configured in the VMware VDS
Verify that LAG is active on the VMware VDS port groups.
Prerequisites
Configure the LAG for the VMNIC on all the VxRail nodes.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Select the Configure tab and click Topology.
3. Select the LAG and verify that the specified VMNIC is assigned to the uplink against the LAG.
Move the second VMNIC to LAG
Migrate the second VMNIC that supports VMware vSAN and VMware vSphere vMotion traffic to LAG.
Steps
1. Right-click the VMware VDS, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, under Member hosts, select all the hosts in the VxRail cluster and click NEXT.
4. For the second port on the LAG, select the VMNIC associated with the uplink that is not used for VMware vSAN and
VMware vSphere vMotion.
5. Select the VMNIC on the first host.
6. Select Unassign adapter.
7. Enable Apply this operation to all other hosts.
8. Click UNASSIGN.
9. Select the same NIC under the On other switches/unclaimed list.
10. Select Assign uplink.
11. Assign the uplink to an available port on the LAG.
12. Select Apply uplink assignment to rest of the hosts and click OK.
13. Review the uplink assignment.
In this example, vmnic2, which was assigned to the unused uplink, has been unassigned from uplink2 and reassigned to the second port that is attached to the LAG.
14. Skip the remaining screens and click Finish.
Configure the second ToR switch for LAG
After you move the second VMNIC to the LAG on the VMware VDS, the switch interface that is connected to that VMNIC must be added to the port channel. For each node, migrate the second switch interface that supports the VMware vSAN or VMware vSphere vMotion traffic into the port channel.
Steps
1. Open a console session to the second ToR switch.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>
26-II-TOR-A# show lldp neighbors | grep crkm01 | grep vmnic2
ethernet1/1/1     crkm01esx03.crk.v...    04:3f:72:c3:77:78    vmnic2
ethernet1/1/5     crkm01esx04.crk.v...    04:3f:72:c3:77:7c    vmnic2
ethernet1/1/7     crkm01esx01.crk.v...    04:3f:72:c3:77:28    vmnic2
ethernet1/1/10    crkm01esx02.crk.v...    04:3f:72:c2:09:2c    vmnic2
3. To configure the switch interface, enter:
26-II-TOR-A(config)# interface ethernet 1/1/7
4. To set the channel group to active, enter:
26-II-TOR-A(conf-if-eth1/1/7)# channel-group 101 mode active
5. For the remaining interfaces, set the channel group to active.
Verify LAG connectivity on the second switch
Verify the port channel and LACP counters on a ToR switch.
Steps
1. To verify that the switch port channels are up and active, enter:
show port-channel summary
Flags: D - Down     I - member up but inactive
       P - member up and active
       U - Up (port-channel)
       F - Fallback Activated
---------------------------------------------------------------------------
Group    Port-Channel           Type    Protocol    Member Ports
---------------------------------------------------------------------------
101      port-channel101 (U)    Eth     DYNAMIC     1/1/7 (P)
102      port-channel102 (U)    Eth     DYNAMIC     1/1/10 (P)
103      port-channel103 (U)    Eth     DYNAMIC     1/1/1 (P)
104      port-channel104 (U)    Eth     DYNAMIC     1/1/5 (P)
2. To view the LACP counters for errors, enter:
show lacp counter
                  LACPDUs        Marker        Marker Response    LACPDUs
Port              Sent   Recv    Sent   Recv   Sent    Recvs      Err Pkts
--------------------------------------------------------------------------
ethernet1/1/7     0      0       0      0      14      11         0
ethernet1/1/10    0      0       0      0      13      9          0
ethernet1/1/1     0      0       0      0      12      10         0
ethernet1/1/5     0      0       0      0      10      7          0
3. For a multichassis LAG, to verify that the port channel status for both VLT peers is active, enter:
show vlt <id> vlt-port-detail
Verify LAG connectivity on VxRail nodes
Verify LACP connectivity on the VMware VDS.
Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get
DVSwitch             LAGID        NIC      Rx Errors   Rx LACPDUs   Tx Errors   Tx LACPDUs
-------------------  -----------  -------  ----------  -----------  ----------  ----------
crk-m01-c01-vds01    3247427758   vmnic2   0           17           0           62
crk-m01-c01-vds01    3247427758   vmnic1   0           243          0           312
3. Repeat these steps on the other VxRail nodes to validate the LACP status.
Enable network redundancy across NDC and PCIe
ports
Enable network redundancy after the VxRail deployment. Migrate the VxRail network traffic on a node from the NDC port to
both NDC and PCIe ports.
You must be able to configure the adjacent ToR switches to complete this task.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
Network redundancy options
Review the network redundancy options and select the option that fits your requirements. Using the following examples,
populate the table for your requirements.
The following table provides an example of four NDC ports to two NDC and two PCIE ports:

Uplink  | Starting uplink configuration      | Starting VMNIC assignment | Ending uplink configuration        | Ending VMNIC assignment
uplink1 | Management                         | VMNIC0 (NDC)              | Management                         | VMNIC0 (NDC)
uplink2 | Management                         | VMNIC1 (NDC)              | Management                         | VMNIC4 (PCIE)
uplink3 | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)              | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)
uplink4 | VMware vSAN/VMware vSphere vMotion | VMNIC3 (NDC)              | VMware vSAN/VMware vSphere vMotion | VMNIC5 (PCIE)
The following table provides an example of two NDC ports to one NDC and one PCIE port:

Uplink  | Starting uplink configuration                 | Starting VMNIC assignment | Ending uplink configuration                   | Ending VMNIC assignment
uplink1 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)
uplink2 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC4 (PCIE)
uplink3 | N/A                                           | N/A                       | N/A                                           | N/A
uplink4 | N/A                                           | N/A                       | N/A                                           | N/A
The following table provides an example of two NDC ports to two NDC and two PCIE ports:

Uplink  | Starting uplink configuration                 | Starting VMNIC assignment | Ending uplink configuration        | Ending VMNIC assignment
uplink1 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)              | Management                         | VMNIC0 (NDC)
uplink2 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC)              | Management                         | VMNIC4 (PCIE)
uplink3 | N/A                                           | N/A                       | VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC)
uplink4 | N/A                                           | N/A                       | VMware vSAN/VMware vSphere vMotion | VMNIC5 (PCIE)
The following table provides an example of four NDC ports to one NDC and one PCIE port:

Uplink  | Starting uplink configuration      | Starting VMNIC assignment | Ending uplink configuration                   | Ending VMNIC assignment
uplink1 | Management                         | VMNIC0 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)
uplink2 | Management                         | VMNIC1 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC4 (PCIE)
uplink3 | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)              | N/A                                           | N/A
uplink4 | VMware vSAN/VMware vSphere vMotion | VMNIC3 (NDC)              | N/A                                           | N/A
The following table provides an example of N ports to N ports:

Uplink  | Starting uplink configuration | Starting VMNIC assignment | Ending uplink configuration | Ending VMNIC assignment
uplink1 | Management                    | VMNIC0 (NDC)              | Management                  | VMNIC0 (NDC)
uplink2 | Management                    | VMNIC1 (NDC)              | Management                  | VMNIC6 (PCIE)
uplink3 | VMware vSAN                   | VMNIC2 (NDC)              | VMware vSAN                 | VMNIC2 (NDC)
uplink4 | VMware vSAN                   | VMNIC3 (NDC)              | VMware vSAN                 | VMNIC7 (PCIE)
uplink5 | VMware vSphere vMotion        | VMNIC4 (NDC)              | VMware vSphere vMotion      | VMNIC4 (NDC)
uplink6 | VMware vSphere vMotion        | VMNIC5 (NDC)              | VMware vSphere vMotion      | VMNIC8 (PCIE)
Populate the grid with the uplink names and VMNIC names:

Uplink  | Starting uplink configuration | Starting VMNIC assignment | Ending uplink configuration | Ending VMNIC assignment
uplink1 |                               |                           |                             |
uplink2 |                               |                           |                             |
uplink3 |                               |                           |                             |
uplink4 |                               |                           |                             |
Verify that the VxRail version supports network redundancy
Check your VxRail version to determine whether network redundancy is supported.
Steps
1. Open the VMware vSphere Web Client and connect to the VMware vCenter Server instance that supports the VxRail
cluster.
2. Select Home > Hosts and Clusters.
3. Select the VxRail cluster to enable network redundancy.
4. Select Configure > VxRail > System.
5. Confirm that the VxRail version supports network redundancy.
Verify that the VxRail cluster is healthy
Validate the VxRail cluster health status.
Prerequisites
Verify access to the VMware vCenter Server that supports the VxRail cluster.
Steps
1. From the VMware vSphere Web Client, select the VxRail cluster in which you want to enable network redundancy.
2. Select the Monitor tab.
3. From the left-menu, select VxRail > Physical View.
4. Verify that the Health State is healthy.
Verify the VxRail physical network compatibility
Check the physical network adapters of the VxRail nodes to verify the planned ending network configuration.
Steps
1. Log in to VMware vSphere Web Client as an administrator.
2. Select Home > Hosts and Clusters > VxRail Cluster.
3. From the VxRail clusters, select a node.
4. Select Configure > Networking > Physical adapters.
5. View the physical adapters serving as an uplink to the VMware VDS. In the following figure, VMNIC 0, VMNIC 1, VMNIC 2,
and VMNIC 3 are connected to a single VMware VDS at a connection speed of 10 Gbps. There are four NDC ports. If your
cluster has only two NDC ports, only two VMNICs are visible.
6. View the unused physical adapters. In the following figure, VMNIC 4 and VMNIC 5 are PCIe network ports. The connection
speed is 10 Gbps and is compatible with the NDC ports.
7. Repeat these steps for each node in the VxRail cluster.
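Optionally, the same adapter inventory can be cross-checked from an ESXi shell on each node. This is a generic esxcli query rather than a VxRail-specific step; the Speed and Description columns help distinguish NDC ports from PCIe ports:
esxcli network nic list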
Verify the physical switch port configuration
Validate the physical switch port configuration. Repeat the steps in this task for each switch port that supports VxRail network traffic.
Prerequisites
Ensure that you have access to the adjacent ToR switches.
To discover the VxRail node connections, your switch operating system must support the LLDP neighbor functionality.
About this task
The command syntax that is shown in this task is based on Dell OS10. As the command differs from model to model and vendor
to vendor, contact your technical support team or see your switch vendor documentation for more details.
Steps
1. Open a console session to one of the Ethernet switches that supports the VxRail cluster.
2. To verify the ports that are connected to the VxRail nodes and VMNIC assignment, enter:
show lldp neighbors | grep vmnic
The following sample outputs are from two different switches:

18KK-TOR-A# show lldp neighbors | grep vmnic
ethernet1/1/3    mrm-md-n1.mrmvxa...    e4:43:4b:5e:04:f0    vmnic0
ethernet1/1/4    mrm-md-n1.mrmvxa...    e4:43:4b:5e:04:f2    vmnic2
ethernet1/1/5    mrm-md-n3.mrmvxa...    e4:43:4b:5e:07:90    vmnic0
ethernet1/1/6    mrm-md-n3.mrmvxa...    e4:43:4b:5e:07:92    vmnic2
ethernet1/1/7    mrm-md-n2.mrmvxa...    e4:43:4b:5f:84:50    vmnic0
ethernet1/1/8    mrm-md-n2.mrmvxa...    e4:43:4b:5f:84:52    vmnic2

18KK-TOR-B# show lldp neighbors | grep vmnic
ethernet1/1/3    mrm-md-n1.mrmvxa...    e4:43:4b:5e:04:f1    vmnic1
ethernet1/1/4    mrm-md-n1.mrmvxa...    e4:43:4b:5e:04:f3    vmnic3
ethernet1/1/5    mrm-md-n3.mrmvxa...    e4:43:4b:5e:07:91    vmnic1
ethernet1/1/6    mrm-md-n3.mrmvxa...    e4:43:4b:5e:07:93    vmnic3
ethernet1/1/7    mrm-md-n2.mrmvxa...    e4:43:4b:5f:84:51    vmnic1
ethernet1/1/8    mrm-md-n2.mrmvxa...    e4:43:4b:5f:84:53    vmnic3
3. Identify a switch port that supports VxRail network traffic.
4. Identify an unused switch port that is planned as target port to enable network redundancy.
5. To ensure the switch port that supports the VxRail network traffic after migration has a compatible configuration, perform
the following:
a. Verify that the VLANs used for VxRail networks (external management, internal management, VMware vSphere vMotion,
VMware vSAN, and guest networks) are compatible on both the switch ports.
b. Verify that the other switch port settings are compatible on both the switch ports.
c. If your final configuration reduces the number of uplinks, verify that the VLANs and the other switch port settings are consolidated into the target switch ports.
The following is a sample switch configuration for a source NDC port and a target PCIe port:
interface ethernet1/1/3
description VxRail-NDC-Port
no shutdown
switchport mode trunk
switchport access vlan 1386
switchport trunk allowed vlan 100-103,3939
mtu 9216
flowcontrol receive on
flowcontrol transmit off
spanning-tree port type edge
exit
interface ethernet1/1/16
description VxRail-PCIe-Port
no shutdown
switchport mode trunk
switchport access vlan 1386
switchport trunk allowed vlan 100-103,3939
mtu 9216
flowcontrol receive on
flowcontrol transmit off
spanning-tree port type edge
exit
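To capture a port's current settings for this comparison, assuming a Dell OS10 switch as in the samples above, you can display the running configuration of each interface and compare the source and target ports side by side:
show running-configuration interface ethernet 1/1/3
show running-configuration interface ethernet 1/1/16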
NOTE: Configure the Ethernet ports before you enable the network redundancy for the VxRail cluster.
Verify active uplink on the VMware VDS port groups post migration
Verify that at least one uplink in each VMware VDS port group is active after the migration.
Prerequisites
Ensure that you have access to the planning grid table Enable network redundancy across NDC and PCIe ports.
Review the planning grid table that is populated with the starting and ending network configuration to identify any uplinks that
are disconnected as part of the uplink reassignment process.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware HCIA Distributed Switch.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. Select Teaming and Failover.
5. Select all the VMware VDS port groups.
6. Verify that at least one of the active uplinks in the failover order will not be disconnected during the migration task.
7. If an uplink under Active uplinks will be disconnected during the migration, modify the failover order to move an uplink that will not be disconnected during the migration to Active uplinks.
Add uplinks to the VMware VDS
Add the VMware VDS uplinks before migrating the VMNICs.
Prerequisites
Review the planning grid table populated in Enable network redundancy across NDC and PCIe ports.
Steps
1. To add the uplinks to the VMware VDS, perform the following:
a. From the VMware vSphere Web Client, select the Networking inventory view.
b. Right-click the VMware HCIA Distributed Switch and select Settings > Edit Settings.
c. Click Uplinks to display the existing uplinks.
d. Click ADD to add the uplinks according to the planning grid table populated in Enable network redundancy across NDC and PCIe ports, and click OK.
2. Skip this task if you are removing or not changing the uplinks.
Migrate the VxRail network traffic to a new VMNIC
Change the VxRail network traffic to use a new VMNIC.
Prerequisites
Review the planning grid table in Enable network redundancy across NDC and PCIe ports.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. From the VxRail Datacenter menu, right-click VMware HCIA Distributed Switch.
3. Click Add and Manage Hosts... and click Manage host networking.
4. Select all the hosts in the VxRail cluster and click NEXT.
5. From the left-menu, select Manage physical adapters to review the existing VMNICs and uplinks mapping.
In the example below:
● Four uplinks on the VMware VDS are linked to four VMNICs.
● VMNIC0 to VMNIC3 are backed by ports on an NDC physical adapter.
● VMNIC4 and VMNIC5 are unassigned and backed by ports on a PCIe adapter.
6. Use the planning grid table in Enable network redundancy across NDC and PCIe ports to set and update the VMNIC and
uplink mapping.
In the example below:
● VMNIC1 from an NDC-based adapter is unassigned from uplink2.
● VMNIC3 from an NDC-based adapter is unassigned from uplink4.
● VMNIC4 from a PCIe-based adapter is assigned to uplink2.
● VMNIC5 from a PCIe-based adapter is assigned to uplink4.
7. Click NEXT.
8. From the VMware HCIA Distributed Switch > Add and Manage Hosts menu, click Manage VMkernel adapters. Do not
migrate any network on the Manage VMkernel adapters window.
9. Click NEXT.
10. From the Migrate VM networking window, click NEXT > FINISH.
11. Monitor the network migration progress until it is complete.
Set the port group teaming and failover policies
Configure the teaming and failover settings for the VMware VDS port groups.
Prerequisites
Go to Enable network redundancy across NDC and PCIe ports to identify the VMNICs that are assigned and unassigned to the
VMware VDS port groups. Identify the ending uplinks from the planning grid table and the VMware VDS port groups that are
assigned to each uplink.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware HCIA Distributed Switch.
3. Select a VMware VDS port group to modify for the network reconfiguration.
4. Move the uplinks up and down to align with the ending network configuration from the planning grid table.
5. Move any uplinks that are unused or removed as part of the reconfiguration process to Unused uplinks, and click OK.
6. Select the next VMware VDS port group that you plan to modify for the network reconfiguration.
7. Move the uplinks up and down to align with the ending network configuration from the planning grid table.
8. Move any uplinks that are unused or removed as part of the reconfiguration process to Unused uplinks, and click OK.
Remove the uplinks from the VMware VDS
Remove the unused uplinks from the VMware VDS. For example, four NDC to one NDC, one PCIe.
Prerequisites
Go to Enable network redundancy across NDC and PCIe ports to identify the uplinks that are removed from the VMware VDS port groups. Identify any uplinks that are listed in the starting network configuration column of the planning grid table but are not listed in the ending network configuration column.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware VDS.
3. Select Settings > Edit Settings.
4. Click Uplinks.
5. Next to each uplink you want to remove, click REMOVE, and then click OK.
Reset the VMware vSphere alerts for network uplink redundancy
Reset the network uplink redundancy alerts.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select vCenter > Hosts and Clusters.
3. Select the VxRail cluster to perform the network migration.
4. Select a host in the VxRail cluster and select the Summary view.
5. Select Reset to Green to silence alarms.
6. Repeat these steps for each host in the VxRail cluster.
Enable VMware vSAN RDMA in the VxRail cluster
VMware vSphere supports VMware vSAN Remote Direct Memory Access (RDMA). RDMA allows direct access from the
memory of one system to the memory of another without using the operating system or CPU. The memory transfer is offloaded
to the host channel adapters with RDMA enabled.
Prerequisites
● Complete the VxRail cluster Day 1 bring-up.
● Verify that there are no critical alarms in the cluster.
● Verify that the VMware vSAN is in a healthy state.
● Configure the DCB-capable switch. Verify that the RDMA-enabled physical NIC is configured for lossless traffic. To ensure a lossless SAN, configure the data center bridging (DCB) mode as IEEE.
○ Set the priority flow control (PFC) value to CoS priority 3, per VMware.
○ See the operation guide from the physical switch vendor to set up the outside network environment to match the data center cluster network strategy and topology.
● Disable the VMware vSAN large-scale cluster support (LSCS) feature. VxRail enables VMware vSAN LSCS as a default setting during the VxRail cluster setup. LSCS conflicts with VMware vSAN RDMA and must be disabled to use VMware vSAN RDMA.
About this task
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later. VxRail version 8.0.010 does not support
VMware vSAN ESA.
The physical NIC cards depend on the project requirements. Only Mellanox NICs are supported.
The RDMA pNIC is dedicated to the storage network.
All hosts in the cluster must support RDMA. If any host loses RDMA support, the entire VMware vSAN cluster switches to TCP.
See VMware vSphere RDMA for more information.
Steps
1. The VMware vCenter Server UI does not provide an interface for the VMware vSAN LSCS feature. VMware has re-enabled the SDK interface to allow configuration options for the VMware vSAN LSCS feature so that you can set up VMware vSAN RDMA. See KB 2110081 and follow the SDK steps for large-scale configurations.
See Set-VsanClusterConfiguration commands for more information.
2. To place the host into maintenance mode and configure advanced settings, enter:
esxcli system settings advanced set -o /VSAN/goto11 -i 0
a. To adjust the TCP/IP heap size, if needed, enter:
esxcli system settings advanced set -o /Net/TcpipHeapMax -i XXXX
b. Manually reboot the host.
c. Repeat the process for each host in the cluster.
d. Disable LSCS to add a node to the cluster.
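The maintenance mode command for step 2 and a concrete heap value for step 2a are not shown above. A minimal sketch using generic esxcli calls, assuming that the ensureObjectAccessibility vSAN evacuation mode is acceptable for your cluster and that 1536 MB (the vSphere maximum) suits your TCP/IP heap requirements:
esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
esxcli system settings advanced set -o /VSAN/goto11 -i 0
esxcli system settings advanced set -o /Net/TcpipHeapMax -i 1536
After the manual reboot completes, exit maintenance mode:
esxcli system maintenanceMode set -e false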
3. Verify that the physical NIC is applied as RDMA adapters.
For Mellanox NIC, see Configure RoCEv2 lossless fabric for VMware ESXi 6.5 and above.
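To confirm from an ESXi shell that the physical NIC is registered as an RDMA adapter, a generic check lists the vmrdma devices and their associated uplinks:
esxcli rdma device list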
4. The following step is required only if the switch detects multiple peers during the DCBx negotiation. On a Dell physical switch, if the PFC operational status is down or disabled, a Multiple peers Detected message is displayed. To disable the multiple LLDP neighbors, perform the following:
a. To disable the VxRail-supplied vib service port-lldp in the VMware ESXi host, enter:
/etc/init.d/port-lldpd disable
Disabling Port LLDP Service daemon
Port LLDP Service successfully disabled
b. Place the host into maintenance mode and individually reboot each host.
For Mellanox NIC, see the vendor documentation on disabling the hardware DCBx from Mellanox for VMware.
5. To enable RDMA support in the VMware vSAN service, perform the following:
a. Select Configure > vSAN > Services.
b. Under the Network section, click EDIT and enable the RDMA support.
Verify that there are no critical alarms in the VxRail cluster. Verify that the VMware vSAN and RDMA configurations are
healthy.
c. To verify the VMware vSAN health and the RDMA configuration health status, select Monitor > vSAN > System
Health > RDMA Configuration Health.
d. Under RDMA Configuration Health, check the health status.
Enable two VMware VDS for VxRail traffic
Change the default NIC layout of the VxRail nodes to a NIC-level redundancy solution that provides more flexibility and high availability.
Prerequisites
● Verify that the configured node has a PCIe NIC with ports of the same speed.
● Verify that all the network VLANs and MTUs are configured on the physical switches before making the network profile
changes.
○ Verify that the new uplinks from all the newly configured ports are in compliance with the existing VLAN and MTU
configurations.
○ Any errors in configuration may result in data unavailability or data loss.
● Verify that the VxRail cluster is in a healthy state.
● For a dynamic node cluster, if the storage type is VSAN_HCI_MESH, complete the manual configuration for the remote
VMware vSAN cluster connection before you configure VMware VDS.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later. See the VxRail 8.0.x Support Matrix for a
list of supported versions.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
This procedure supports the following:
● Standard cluster deployment
● Customer-managed VMware VDS and VxRail-managed VMware VDS
● Two or four uplinks that are assigned to each of the VMware VDS
● Two uplinks per system port group
About this task
The following table provides the default names and VMkernel for each port group:
Port group               | Default name                 | VMkernel
Management               | Management Network-xxxxxx    | vmk2
VMware vSAN              | Virtual SAN-xxxxxxxx         | vmk3
VMware vSphere vMotion   | vSphere vMotion-xxxxxxxxxxx  | vmk4
VxRail discovery         | VxRail Management-xxxxxx     | vmk0
Steps
1. To verify the port group details from one of the VxRail nodes, go to Home > Hosts and Clusters.
2. Select a node and click Configure > Networking > VMkernel adapters.
3. Check the Network Label of each VMkernel adapter to obtain the exact port group name.
Use case 1 - Enable uplinks
Change the cluster uplink configuration from four with one VMware VDS to two with two VMware VDS.
About this task
The following table provides the scenario for this use case:
Configuration | Uplinks | VMware VDS
Starting      | 4       | 1
Ending        | 2       | 2
In the ending 2:2 uplink configuration:
● The original VMware VDS is VMware VDS1, and the new VMware VDS is VMware VDS2.
● The two VMware VDS permanently handle all the traffic.
● You can unassign uplink3 and uplink4 on the VMware VDS1 and then create a VMware VDS (VDS2) to use the related ports.
● No extra ports are added.
● The same MTU configuration is used for all network traffic.
● Migrate the VMware vSAN or VMware vSphere vMotion traffic to a new VMware VDS.
Steps
1. To create the VMware VDS2 with two uplinks, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking > DataCenter.
c. From the Actions menu, select Distributed Switch > New Distributed Switch.
d. Enter a name and click Next.
e. Select the same version as the existing VMware VDS and click Next.
f. Set the number of uplinks to 2 and click Next.
g. Click Finish.
h. Go to VMware VDS2 and from the Actions menu, select Settings > Edit Settings...
i. Open the Uplinks tab. For the unified name rule, change Uplink 1 to uplink1 and Uplink 2 to uplink2, and click OK.
2. To add existing VxRail nodes to VMware VDS2, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Add hosts and click NEXT.
e. On the Select hosts page, click New hosts. Select the hosts that are associated with the VMware VDS and click OK.
f. Click NEXT and go to the Manage physical adapters page.
g. Click OK or NEXT for the rest of the pages and complete the configuration.
3. To create a port group for the VMware vSAN in the VMware VDS2, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Distributed Port Group > New Distributed Port Group and click NEXT.
d. From the Configure Settings page, define the same VLAN as the VMware vSAN port group in the VMware VDS1.
e. Enable Customize default policies configuration and click NEXT > NEXT.
f. In the Teaming and failover step, adjust uplink1 to Standby uplinks and click NEXT > NEXT.
g. Follow the wizard steps to complete the configuration.
4. To create a port group for the VMware vSphere vMotion in the VMware VDS2, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Distributed Port Group > New Distributed Port Group and click NEXT > NEXT.
d. From the Configure Settings page, define the same VLAN with the VMware vSphere vMotion port group as in the
VMware VDS1.
e. Enable Customize default policies configuration and click NEXT > NEXT.
f. In the Teaming and failover step, adjust the uplink2 to Standby uplinks and click NEXT > NEXT.
g. Follow the wizard steps to complete the configuration.
5. To unassign the uplink3 in the VMware VDS1, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS1.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Manage host networking and click NEXT.
e. On the Select hosts page, select the hosts that are associated with the VMware VDS and click OK.
f. Click NEXT.
g. Select the physical VMNIC that has the uplink3 assigned and click Unassign adapter.
h. Click Next and complete the configuration.
6. To assign the released VMNIC to uplink1 in the VMware VDS2, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Manage host networking and click Next.
e. On the Select hosts page, select the hosts that are associated with the VMware VDS and click OK.
f. Click NEXT.
g. On the Manage physical adapters page, select a physical NIC that is released from Step 5.
h. Click Assign uplink.
i. Select uplink1 and click OK.
j. Click NEXT > NEXT > FINISH.
7. To migrate the VMware vSAN-related VMkernel (typically VMk3) from the VMware VDS1 port group to the VMware VDS2
port group, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Manage host networking and click Next.
e. On the Select hosts page, select the hosts that are associated with the VMware VDS and click OK.
f. Click Next.
g. On the Manage physical adapters page, make no changes, and click Next.
h. Select the VMware vSAN VMkernel (typically VMk3) on each host and click Assign port group. Select the new VMware vSAN port group that is created in Step 3 and click OK.
i. Click NEXT > NEXT > FINISH.
8. To migrate the VMware vSphere vMotion related VMkernel (typically VMk4) from the VMware VDS1 port group to the VMware VDS2 port group, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Manage host networking and click Next.
e. On the Select hosts page, select the hosts that are associated with the VMware VDS and click OK.
f. Click NEXT.
g. On the Manage physical adapters page, make no changes, and click Next.
h. Select the VMware vSphere vMotion VMkernel (typically VMk4) on each host and click Assign port group. Select the new VMware vSphere vMotion port group that is created in Step 4 and click OK.
i. Click NEXT > NEXT > FINISH.
9. To unassign the uplink4 in the VMware VDS1, perform the following:
a. Repeat the operation for uplink3 in the VMware VDS1. See Step 5 for more details.
b. Select the VMNIC that has the uplink4 assigned and click Unassign adapter.
10. To assign the released VMNIC to uplink2 in the VMware VDS2, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to the VMware VDS2.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Manage host networking and click Next.
e. On the Select hosts page, select the hosts that are associated with the VMware VDS and click OK.
f. Click NEXT.
g. On the Manage physical adapters page, select the physical NIC that is released from Step 9.
h. Click Assign uplink.
i. Select uplink2 and click OK.
j. Click NEXT > FINISH to finish.
11. On the Summary of Hosts and Clusters page, click Reset To Green to clear any alerts about lost network redundancy.
Use case 2 - Modify the cluster uplink configuration
In this use case, two VMware VDS switches handle the network traffic. Two extra ports are added for the VMware vSAN and VMware vSphere vMotion traffic, and a new VMware VDS2 uses the new ports. This use case supports the same MTU configuration for all the network traffic.
About this task
Initially, configure the VxRail cluster with two uplinks in a single VMware VDS.
After the conversion, configure the VxRail cluster with two VMware VDS with two uplink ports each.
Steps
1. Create the VMware VDS with two uplinks.
a. Perform Step 1 in Use case 1 - Enable uplinks.
b. Perform Step 2 in Use case 1 - Enable uplinks.
2. Configure the port groups to use the uplink1 and uplink2 as Active or Standby uplinks.
a. Perform Step 3 in Use case 1 - Enable uplinks.
b. Perform Step 4 in Use case 1 - Enable uplinks.
3. Assign the new VMNICs to uplink1 or uplink2 in the VMware VDS2.
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking and go to VMware VDS2.
c. From the Actions menu, select Add and Manage Hosts.
d. On the Select task page, select Manage host networking and click Next.
e. On the Select hosts page, select the hosts that are associated with the VMware VDS. Click OK.
f. Click Next.
g. On the Manage physical adapters page, select the available physical NIC from the other switches or the unclaimed list to assign an uplink to the adapter.
h. Click Assign uplink.
i. Select uplink1 and click OK.
j. Repeat the operation for uplink2. On the Manage physical adapters page, select the available physical NIC from the
other switches or the unclaimed list to assign an uplink to the adapter.
k. Click Assign uplink.
l. Select uplink2 and click OK.
m. Click Next.
4. Migrate the VMware vSAN related VMkernel (typically VMk3) from the VMware VDS1 port group to the VMware VDS2 port
group by performing Step 7 in Use case 1 - Enable uplinks.
5. Migrate the VMware vSphere vMotion related VMkernel (typically VMk4) from the VMware VDS1 port group to the VMware
VDS2 port group by performing Step 8 in Use case 1 - Enable uplinks.
6. On the Summary of Hosts and Clusters page, click Reset To Green to clear any alerts about lost network redundancy.
Use case 3 - Modify the cluster uplink configuration
In this use case, two VMware VDS switches handle the network traffic. Two extra ports are added for the VMware vSAN and VMware vSphere vMotion traffic, and a new VMware VDS2 uses the new ports. The Management Network, VMware vSAN, and VMware vSphere vMotion traffic use different VMNIC ports, and the same MTU configuration is used for all the network traffic. The VMware vSphere vMotion traffic is assigned to two ports on the VMware VDS2.
About this task
Initially, configure the VxRail cluster with four uplink ports in the VMware VDS1.
After the conversion, configure the VxRail cluster with two VMware VDS. There are four uplinks in the VMware VDS1 and two
uplinks in the VMware VDS2.
Steps
1. Create a VMware VDS with two uplinks.
a. Perform Step 1 in Use case 1 - Enable uplinks.
b. Perform Step 2 in Use case 1 - Enable uplinks.
2. Configure port groups to use uplink1 and uplink2 as Active and Standby uplinks.
a. Perform Step 3 in Use case 1 - Enable uplinks.
b. Perform Step 4 in Use case 1 - Enable uplinks.
3. Assign the new VMNIC to the VMware VDS2 with uplink1 and uplink2 by performing Step 3 in Use case 2 - Modify the
cluster uplink configuration.
4. Migrate the VMware vSphere vMotion related VMkernel (typically VMk4) from the VMware VDS1 port group to VMware
VDS2 port group by performing Step 8 in Use case 1 - Enable uplinks.
5. On the Summary of Hosts and Clusters page, click Reset To Green to clear any alerts about lost network redundancy.
See Configure Physical Network Adapters on a VMware VDS for more details.
Migrate the satellite node to a VMware VDS
A satellite node is deployed with a VMware standard switch by default. Migrate the VMware standard switch to a VMware VDS
that manages the VMware vCenter Server instance.
Prerequisites
To set up the satellite node, you must:
● Verify that the VxRail management cluster is deployed.
● Verify that the satellite node is added into a folder that manages the VMware VDS.
About this task
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later. VxRail version 8.0.010 does not support
VMware vSAN ESA or satellite nodes.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Capture the satellite node VMware standard switch settings
Capture the satellite node VMware standard switch settings to create the VMware VDS. The VMware VDS uses these same
settings to manage the VMware vCenter Server instance.
Steps
1. Log in to VMware vSphere Web Client as an administrator.
2. From the left-menu, select Networking.
3. Select the Virtual switches tab and locate the VMware standard switch that supports the satellite node.
4. Click Edit Settings.
5. Identify and capture the MTU.
6. Identify and capture the VMNICs that are connected to the VMware standard switch.
7. Identify and capture the NIC teaming policy.
8. Select the Port groups tab.
a. Identify and capture the VLAN that is assigned to the management network and VM network. The VLANs must be the
same.
b. Identify and capture any port groups and VLANs that are assigned for the guest networks or other management
networks with at least one active port.
9. Select the VMkernel NICs tab and capture the name of each VMkernel NIC and the name of the port group assignment.
10. Exit the VMware ESXi session.
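Optionally, the same settings can be captured from an ESXi shell on the satellite node using generic esxcli queries (not VxRail-specific commands):
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list
esxcli network ip interface list
The first command shows the vSwitch MTU, uplinks, and teaming policy; the second shows the port groups and their VLAN IDs; the third shows the VMkernel NICs and their port group assignments.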
Create the VMware VDS for the satellite node
Create a VMware VDS for the satellite node.
Steps
1. Log in to the VMware vSphere Web Client of the management cluster as an administrator.
2. Select Networking.
3. From the vSphere Client menu, select Inventory.
4. Select the data center that contains the satellite node folder.
5. Right-click the data center and select Distributed Switch > New Distributed Switch.
6. Enter a name for the VMware VDS and click NEXT.
7. Select the latest version that is compatible with the VMware ESXi version on the satellite node and click NEXT.
8. Set the number of uplinks to match the number of uplinks on the satellite node VMware standard switch.
9. Click NEXT and then FINISH.
Set the MTU on the VMware VDS
Configure the MTU value on the VMware VDS.
Steps
1. In the VMware vSphere Web Client, select the new VMware VDS.
2. Right-click the VMware VDS and select Settings > Edit Settings.
3. In the Edit Settings window, select Advanced.
4. Set the MTU to match the satellite node VMware standard switch and click OK.
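After the satellite node is added to the VMware VDS in a later task, you can optionally confirm the MTU from an ESXi shell on the node. This generic esxcli query lists each distributed switch that the host participates in, including its MTU:
esxcli network vswitch dvs vmware list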
Create the VMware VDS port groups for the satellite node
Create a VMware VDS port group on the VMware VDS that supports satellite node networking. Repeat these steps to add the
new port group to the VMware VDS.
Steps
1. Locate the first port group that was captured on the satellite node on the VMware standard switch.
2. In the VMware vSphere Web Client, select the new VMware VDS.
3. Right-click the VMware VDS and select Distributed Port Group > New Distributed Port Group.
4. Under Name and Location, perform the following:
a. The distributed port group name can be the same or correlate with the port group on the satellite node VMware standard
switch. Enter the distributed port group name.
b. Click NEXT.
5. Under Configure Settings, to set the properties of the new port group, perform the following:
a. For the VLAN Type, select VLAN.
b. Enter the VLAN ID.
The VLAN ID must match with the port group VLAN ID on the satellite node VMware standard switch.
c. Select Customize default policies configuration.
6. Under Teaming and Failover, set the policy that matches the settings that are captured on the satellite node VMware
standard switch.
7. Proceed through the remaining screens and click FINISH.
8. Select the next port group that is captured from the satellite node VMware standard switch and repeat these steps.
Migrate the satellite node to the new VMware VDS
Add the satellite node into the new VMware VDS.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select VxRail-Datacenter > VMware HCIA Distributed Switch.
3. Right-click the VMware VDS for the satellite node and select Add Hosts....
4. From the Add hosts wizard, enter host information and click ADD HOST.
5. Under Select hosts, select the satellite node.
6. Under Manage VMkernel adapters, to migrate the VMkernel from the satellite node VMware standard switch to the port
groups on the VMware vCenter Server VDS, perform the following:
a. Select the first VMkernel to assign to a port group.
b. Click ASSIGN PORT GROUP.
c. Select the port group from the drop-down.
d. Click ASSIGN.
e. Repeat these steps for the next VMkernel on the list.
7. Under Migrate VM Networking, to migrate the VMs to the new port group on the VMware VDS, perform the following:
a. Select the first VM.
b. Migrate the NIC from the source port group on the satellite node VMware standard switch to the new port group on the
VMware VDS.
c. Repeat these steps for the remaining VMs in the list.
8. Click FINISH.
Next steps
Verify the VMware VDS.
1. Connect to the VMware vSphere Web Client.
2. Select Home > Hosts and Clusters.
3. Select Configure > Virtual Switches.
4. Select the satellite node and verify the new VMware VDS.
Modify the VMware VDS port group teaming and
failover policy
Modify the port group teaming and failover policy for the VMware VDS that supports VxRail networks.
About this task
This procedure applies to the VxRail cluster running the VxRail 8.0.x and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. To connect to the VMware VDS, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking.
c. Select the VMware VDS that supports the VxRail cluster that you plan to modify.
2. To identify the number of uplinks that support the VMware VDS, perform the following:
a. From the Home screen, select the Actions drop-down menu.
b. Select Settings > Edit Settings and click the Uplinks tab to view the number of uplinks that are assigned to the
VMware VDS.
The options for the failover settings are based on the number of uplinks.
3. To configure the port group teaming and failover policy, perform the following:
a. From the VMware VDS, select the port group to modify.
b. Select Configure and from the left-menu, click Properties > EDIT.
4. From the left-menu, select Teaming and failover to view the existing port group policy.
5. Select the Load balancing policy that meets the requirements for the network traffic on the port group.
Load-balancing option                    | Description                                                                                                                    | Supported
Route based on originating virtual port | Forwards the network traffic through the originating uplink. There is no load balancing that is based on the network traffic. | Yes
Use explicit failover order             | Uses the highest-order uplink that passes the failover detection. There is no load balancing that is based on the network traffic. | Yes
Route based on source MAC hash          | The uplink is selected based on the VM MAC address. There is no load balancing that is based on the network traffic.          | Yes
Route based on physical NIC load        | Monitors the network traffic and adjusts overloaded uplinks by moving the network traffic to another uplink.                  | Yes
Route based on IP hash                  | Depends on the logical link setting of the physical switch port adapters, which is not supported in VxRail.                   | No
6. Select the failover order for teaming and failover policy.
a. Select the table based on the number of uplinks that are configured on the VxRail VMware VDS.
b. Use the name of the VMware VDS port group to map the corresponding row in the selected table.
● The second column displays the supported settings where the uplinks are configured as active/active.
● The fourth column displays the supported settings where the uplinks are configured as active/standby.
The following table lists the supported failover options for the VxRail port groups with two configured uplink ports. In the Active/Standby column, the uplinks are listed in failover order (active first, then standby):

VMware VDS port group            | Active/Active      | Active/Standby
Management Network               | uplink1, uplink2   | uplink1, uplink2
VMware vCenter Server            | uplink1, uplink2   | uplink1, uplink2
VMware vSAN Network              | uplink2, uplink1   | uplink2, uplink1
VMware vSphere vMotion Network   | uplink1, uplink2   | uplink1, uplink2
VxRail Management Network        | uplink1, uplink2   | uplink1, uplink2
VMware Guest Network             | uplink1, uplink2   | uplink1, uplink2
The following table lists the supported failover options for the VxRail port groups with four configured uplink ports:

VMware VDS port group            | Active/Active      | Active/Standby
Management Network               | uplink2, uplink1   | uplink2, uplink1
VMware vCenter Server            | uplink1, uplink2   | uplink1, uplink2
VMware vSAN Network              | uplink3, uplink4   | uplink3, uplink4
VMware vSphere vMotion Network   | uplink4, uplink3   | uplink4, uplink3
VxRail Management Network        | uplink2, uplink1   | uplink2, uplink1
VMware Guest Network             | uplink1, uplink2   | uplink1, uplink2
You cannot configure the unused uplinks into the failover order setting.
7. To configure an active/active failover order, perform the following:
a. Select the uplink under Standby uplinks.
b. Use the UP arrow to move the uplink to Active uplinks.
8. To configure an active/standby failover order, perform the following:
a. Under the Active uplinks, select the uplink that is supported to be in standby mode per the supported failover order for
this port group.
b. Use the DOWN arrow to move the uplink to the Standby uplinks setting.
9. To complete the policy update, click OK.
Optimize cross-site traffic for VxRail
You can use telemetry settings to collect system running data, such as performance metrics and alarms. The data is sent back through remote support connectivity for analysis, providing advance system health status.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Remote-office branch-office sites
Remote-office branch-office (ROBO) sites deploy a centralized VMware vCenter Server with limited bandwidth between the cluster and the VMware vCenter Server. If the life cycle management (LCM) bundle that is distributed from the repository to the cluster is 4 GB, the limited bandwidth is quickly consumed. The process becomes time-consuming and may trigger a distribution failure, which causes network congestion.
The customer can provide a jump box service to store the bundle for the ROBO cluster, and the LCM is locally triggered from the UI to upload the bundle. To centrally perform LCM for each ROBO site, use a jump box to store the upgrade bundle, and trigger the LCM locally to decrease the traffic.
The following figure shows a simple topology with a centrally shared VMware vCenter Server:
Telemetry settings
The following table describes the data that is collected and the amount of daily traffic between VxRail Manager and the VMware
vCenter Server:
Telemetry level | Daily traffic between the VxRail Manager and the VMware vCenter Server
LIGHT           | 11 MB
BASIC           | 64 MB
ADVANCED        | 75 MB
NONE            | 0 MB
NOTE: Telemetry settings are different on the API as shown in the table.
You can manage telemetry settings using the VxRail onboard API, client URL (curl) commands, or VxRail Manager. To modify telemetry settings using the VxRail onboard API:
● Verify that you have access to the REST API.
● Verify the IP address of the VxRail Manager onboard API.
Limitations for a ROBO environment with a T1 line
The following limitations apply for a ROBO environment with a T1 line (network speed of 1.544 Mbps):
● A ROBO environment between the VMware vCenter Server and the VxRail clusters is not supported. VMware vCenter
Server log details cannot be collected when using the ROBO environment between the VMware vCenter Server and the
VxRail clusters.
● The backup and restore script consumes extra bandwidth between the VMware vCenter Server and the VxRail clusters. You
can temporarily use VMware snapshots instead of a backup.
Configure telemetry settings using curl commands
Configure or disable telemetry settings using client URL (curl) commands.
Prerequisites
Verify that you have the following:
● Username and password for the curl command
● Four 14G R640 nodes with 4 x 10 GbE NICs
● VxRail cluster with VMware vCenter Server
● Twenty-five running workload VMs, without I/O
● 10+ alarms
● Remote support connectivity enabled
● One market application
Steps
1. To view the telemetry setting, enter:
curl -k -H "Content-Type: application/json" -X GET --user username:password https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier

{"level":"BASIC"}
2. To modify the telemetry level, using the POST request method, enter:
curl -k -X POST -H "Content-type: application/json" -d '{"level":"BASIC"}' --user management:tell1103@ https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier

Where:
● -k: turns off verification of the certificate
● -d: data
● -X: HTTP method
● -H: header
● --user: credentials, separated by ":". Here, management:tell1103@ is only an example; enter the credentials for your setup.

Sample request body:

{"level":"BASIC"}
3. To disable telemetry, set the level to NONE.
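For example, a minimal sketch that disables telemetry through the same endpoint shown above (substitute your own credentials and VxRail Manager IP address):
curl -k -X POST -H "Content-type: application/json" -d '{"level":"NONE"}' --user <username>:<password> https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier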
Configure telemetry settings from VxRail Manager
Select telemetry settings to define the level of data that is collected for your VxRail environment.
Prerequisites
Verify that you have the following:
● Four 14G R640 nodes with 4 x 10 GbE NICs
● VxRail cluster with VMware vCenter Server
● Twenty-five running workload VMs, without I/O
● 10+ alarms
● Remote support connectivity enabled
● One market application
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the cluster and from the Configure left-menu, select VxRail > Support.
3. If the remote support connectivity is enabled, click Edit > Edit Customer Improvement to redirect you to the Customer
Improvement Program page.
4. Select the telemetry setting and click NEXT > FINISH.
Manage VxRail cluster settings

To configure external storage for the dynamic node cluster, see Configure External Storage of the Dynamic Node Cluster.
Use the following links to manage cluster settings:
● Change the VxRail Cluster EVC Mode
● Fault Tolerance on VxRail
● Managing the VMware vCenter Server for AD authentication
● Join or leave an Active Directory Domain
Configure external storage for standard clusters
After installing two VMware vSAN VxRail clusters, manually mount the remote VMware vSAN datastore of the other VMware vSAN VxRail cluster.
Prerequisites
Verify that two VMware vSAN clusters are deployed in the same VMware data center.
1. Log in to the VMware vSphere Web Client with administrator privileges.
2. From the Inventory icon, select the cluster and click the Configure tab.
3. Under Remote Datastores, verify that two VMware vSAN clusters are deployed in the same VMware data center.
About this task
This procedure applies to VxRail 7.0.480 and later and VxRail 8.0.200 and later.
To ensure connectivity in L3 topology, verify that the VMware vSAN override gateway is configured for each server cluster
node.
If the server cluster is running a VxRail version earlier than 7.0.480 (VMware vSphere 7.x) or 8.0.200 (VMware vSphere 8.x),
ensure that there is a static route on the server cluster nodes to reach the VMware vSAN network of the client cluster.
This procedure is intended for customers and Dell Technologies service providers who are authorized to work on a VxRail
cluster.
Steps
1. To ensure that the VMware vSAN override gateway is set for the server cluster nodes if both clusters are on an L3 network, perform one of the following:
● If the server cluster is running VxRail 7.0.480 and later or VxRail 8.0.200 or later, go to step 2.
● If the server cluster is running an earlier version than VxRail 7.0.480 or VxRail 8.0.200, go to step 3.
2. From the Inventory icon, select the VMware VDS and click the Configure tab, and then select Topology.
a. Select the VMware vSAN traffic setting on each node and click the edit icon. On the Edit Settings window, check
Override default gateway for this adapter on IPv4 and click OK.
b. If the override gateway on the server cluster is not configured for each node, select the VMware vSAN port group.
c. Select the hosts and click Edit Settings to configure the VMkernel adapter.
d. Under IPv4 settings, click Use IPv4 settings and then enable and configure the default gateway.
e. On the Ready to complete window, click FINISH.
f. To configure the IPv4 static route, enter:
esxcli network ip route ipv4 add -n <hci_mesh_vsan_cluster_subnet>/<netmask_length> -g <server_cluster_vsan_gateway>
3. For versions earlier than VxRail 7.0.480 or VxRail 8.0.200 only, to set a static route on the server cluster nodes to reach the
VMware vSAN network of the client cluster, perform the following:
a. Select the configured node.
b. Click the Configure tab and select System > Services.
c. Select SSH or ESXi Shell and click START.
If the SSH service is enabled, you can log in to the configured node CLI using the SSH client. If the VMware ESXi Shell
service is enabled, you can log in to the configured node CLI using DCUI with Alt and F1.
d. Log in to the configured node as root.
e. To check the IPv4 static route, enter: esxcli network ip route ipv4 list
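For example, a sketch with hypothetical addresses (replace the subnet with the client cluster VMware vSAN network and the gateway with your server cluster VMware vSAN gateway):
esxcli network ip route ipv4 add -n 192.168.130.0/24 -g 192.168.120.1
esxcli network ip route ipv4 list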
4. On the Ready to complete page, click Finish.
5. To mount the remote VMware vSAN data store on another VMware vSAN cluster, perform the following:
a. Select a cluster, then click the Configure tab.
b. Select Remote Datastores and click MOUNT REMOTE DATASTORE.
c. On the Mount Remote Datastore window, select the data store and click NEXT.
d. On the Check compatibility window, click FINISH.
Convert one VMware VDS with two uplinks to two
VMware VDS with two uplinks
Two VMware VDS permanently handle all the traffic. Two additional ports are added for the VMware vSAN or VMware vSphere vMotion traffic, and a new VDS2 is created to use these ports. The same MTU configuration is used for all traffic during the conversion procedure.
About this task
This procedure uses the same tasks as Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks, with a few modifications.
Steps
1. Use the following table to perform the first four tasks:
Procedure                                                | Entry
Create a VMware VDS and assign two uplinks               | Same entries as previous procedure
Add existing VxRail nodes to VDS2                        | Same entries as previous procedure
Create the port group for VMware vSAN in VDS2            | Set uplink1 status as Active/Standby
Create the port group for VMware vSphere vMotion in VDS2 | Set uplink2 status as Active/Standby
2. To assign a new VMNIC to uplink1/uplink2 in VDS2, perform the following:
a. From the VMware vSphere Web Client, log in as administrator.
b. Under the Inventory icon in the top-left menu bar, select a data center.
c. Click the Networks tab.
d. Select Distributed Switches to view VDS2.
e. From the Actions menu, select Add and Manage Hosts.
f. From the Select task page, select Manage host networking and click NEXT.
g. From the Select hosts page, select Attached hosts and choose the hosts that are linked to the distributed switch.
h. Click OK and then click Next.
i. On the Manage physical adapters page, select an active physical NIC from the on other switches/unclaimed list to assign an uplink to the adapter.
j. Click Assign uplink.
k. Select uplink1 and click OK.
l. Repeat step i to select the new VMNIC for uplink2.
m. Click Assign uplink.
n. Select uplink2, click OK, and click Next.
3. Perform the following tasks using the same entries as the previous procedure:
● Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port groups
● Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups
Convert one VMware VDS to two VMware VDS
Convert one VMware VDS to two VMware VDS in Day 2 for VxRail traffic. You can change the default NIC layout of VxRail nodes for greater flexibility and higher availability through a NIC-level redundancy solution.
Prerequisites
Before you convert a VMware VDS, perform the following:
● Verify that each configured node has a PCIe NIC with the same speed.
● Validate that all network VLAN and MTU configurations are properly set on the physical switches before making any network
profile changes.
CAUTION: Misconfiguration may lead to data unavailability or loss.
● Confirm that the new uplinks from newly configured ports comply with existing VLAN and MTU configurations.
● Verify that the cluster is in a healthy state.
● Configure the remote VMware vSAN cluster connection before the VMware VDS configuration in a dynamic node cluster with the VSAN_HCI_MESH storage type.
About this task
You can convert a customer-managed VMware VDS or VxRail-managed VMware VDS for a standard cluster deployment.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster. This procedure applies to VxRail versions 7.0.240 or 8.0 and later.
See VxRail 7.x Support Matrix or VxRail 8.x Support Matrix for a list of supported versions.
Identify the port groups
Identify the port groups to switch from the default NIC layout of VxRail nodes to a more flexible and highly available NIC-level redundancy solution.
About this task
Default names are used to identify port groups for Management, VxRail discovery, vSAN, and vMotion.
The following list describes the port group types, default names, and VMkernel port groups:
● Management: Management Network-xxxxxx (vmk2)
● vSAN: Virtual SAN-xxxxxxxx (vmk3)
● vMotion: vSphere vMotion-xxxxxxxxxxx (vmk4)
● VxRail discovery: VxRail Management-xxxxxx (vmk0)
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a node.
3. Select the Configure tab.
4. Click Networking > VMkernel adapters.
5. In the VMkernel adapters window, under Network Label, view the port group name.
Convert one VMware VDS with four uplinks to two
VMware VDS with four uplinks/two uplinks
Allocate different VMNIC ports to separate VMware management, VMware vSAN, and VMware vSphere vMotion traffic. The same MTU configuration is used for all traffic during the conversion procedure. Separate the VMware vSphere vMotion traffic to VDS2 with two extra ports.
About this task
This procedure uses the same tasks as Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks with a few modifications.
Steps
1. Perform the first four tasks using the following entries:
● Create a VMware VDS and assign two uplinks: Same entries as previous procedure
● Add existing VxRail nodes to VDS2: Same entries as previous procedure
● Create the port group for VMware vSAN in VDS2: Set uplink1 status as Active/Standby
● Create port group for VMware vSphere vMotion in VDS2: Set uplink2 status as Active/Standby
2. To assign a new VMNIC to uplink1/uplink2 in VDS2, perform the following:
a. From the VMware vSphere Web Client, log in as administrator.
b. Under the Inventory icon in the top-left menu bar, select a data center.
c. Click the Networks tab.
d. Select Distributed Switches to view VDS2.
e. From the Actions menu, select Add and Manage Hosts.
f. From the Select task page, select Manage host networking and click NEXT.
g. From the Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
h. Click OK and then click Next.
i. On the Manage physical adapters page, select an active physical NIC from the On other switches/unclaimed list to assign an uplink to the adapter.
j. Click Assign uplink.
k. Select uplink1 and click OK.
l. Repeat step i to assign the new VMNIC to uplink2.
m. Click Assign uplink.
n. Select uplink2, click OK, and then click Next.
3. Perform the following task using the same entries as the previous procedure:
● Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups
The VxRail Physical View page does not support PCIe adapter display. The missing PCIe port display information is a known issue.
See Configure Physical Network Adapters on a vSphere Distributed Switch for more information.
Convert one VMware VDS with four uplinks to two
VMware VDS with two uplinks
Two VMware VDS permanently handle all traffic. The conversion procedure only supports using the same MTU configuration for
all traffic.
Create a VMware VDS and assign two uplinks
Create the VMware VDS as VDS2 and set the uplinks to 2. Edit the uplinks to be uplink1 and uplink2.
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Select the Networks tab.
4. Select Distributed Switch.
5. Under the Actions menu, select Distributed Switch > New Distributed Switch.
6. Enter the name and location and click Next.
7. Select the same version of the existing VMware VDS and click Next.
8. Set the number of uplinks to 2 and click Next.
9. Review settings and click FINISH.
10. From the left menu, select the new VMware VDS and click the Actions menu.
11. Select Settings > Edit Settings....
12. Go to the Uplinks tab and modify Uplink 1 to uplink1 and Uplink 2 to uplink2 to adhere to the unified name rule
and click OK.
Add existing VxRail nodes to VDS2
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Select the data center and click Networks.
3. Select Distributed Switches to view VDS2.
4. From Actions menu, select Add Host to launch the wizard.
5. From Select task page, select Add hosts and click NEXT.
6. From Select hosts page, select New hosts and choose the associated hosts to add the VxRail nodes to the VDS2
distributed switch.
7. Click OK.
8. Click Next to go to the management physical adapters.
9. Click OK or Next.
Create the port group for VMware vSAN in VDS2
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From the Actions menu, select Distributed Port Group > New Distributed Port Group.
6. In the Configuration Settings step, assign the same VLAN that the VMware vSAN port group uses in VDS1.
7. Verify the Customize default policies configuration and click NEXT.
8. In the Teaming and failover step, move uplink1 to Standby uplinks and click NEXT.
9. Follow the instructions on the screen to complete the remaining steps and finish the configuration.
Create port group for VMware vSphere vMotion in VDS2
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From the Actions menu, select Distributed Port Group > New Distributed Port Group.
6. In the Configuration Settings step, assign the same VLAN that the VMware vSphere vMotion port group uses in VDS1.
7. Verify the Customize default policies configuration and click NEXT.
8. In the Teaming and failover step, move uplink2 to Standby uplinks and click NEXT.
9. Follow the instructions on the screen to complete the remaining steps and finish the configuration.
Unassign uplink3 in VDS1
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS1.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the VMware VDS.
8. Click OK and then click Next.
9. Select the VMNIC assigned to uplink3 and click Unassign adapter.
10. Click Next to complete the configuration.
Assign the released VMNIC to uplink1 in VDS2
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the VMware VDS.
8. Click OK and then click Next.
9. Select an active physical NIC released in Unassign uplink3 in VDS1 on the Manage physical adapters page.
10. Click Assign uplink.
11. Select uplink1 and click OK.
12. Click Next twice to complete the process.
Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port
groups
The VMware vSAN VMkernel is represented as vmk3.
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS1.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
8. Click OK and then click Next.
9. Click Next without making any changes on the Manage physical adapters page.
10. Click Assign port group after selecting VMware vSAN vmk3 on each host.
11. Select the newly created VMware vSAN port group and click OK.
12. Click Next twice and then click Finish.
Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port
groups
The VMware vMotion VMkernel is represented as vmk4.
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS1.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
8. Click OK and then click Next.
9. Click Next without making any changes on the Manage physical adapters page.
10. Click Assign port group after selecting the VMware vMotion vmk4 on each host. Select the newly created VMware vSphere vMotion port group from Create port group for VMware vSphere vMotion in VDS2 and click OK.
11. Click Next twice and then click Finish.
Unassign uplink4 in VDS1
Steps
1. From the VMware vSphere Web Client, click Networking and go to VDS1.
2. From Actions menu, select Add and Manage Hosts.
3. From Select task page, select Manage host networking and click NEXT.
4. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
5. Click OK and then click Next.
6. Select the VMNIC assigned to uplink4 and click Unassign adapter.
7. Click Next to complete the configuration.
Assign the released VMNIC to uplink2 in VDS2
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the VMware VDS.
8. Click OK and then click Next.
9. Select an active physical NIC released in Unassign uplink4 in VDS1 on the Manage physical adapters page.
10. Click Assign uplink.
11. Select uplink2 and click OK.
12. Click Next twice to complete the process.
Next steps
The Summary tab of the Hosts and Clusters page displays Network uplink redundancy loss alerts for the reconfigured nodes. Click Reset to Green to clear the alerts.
Enable DPU offloads on VxRail
Enable DPU offloads on VxRail.
Prerequisites
● Do not involve the DPU NICs in the Day 1 bring up.
● Create the VMware VDS in the Day 2 task.
● Use one of these three models to build your VxRail cluster: V670F, P670N, or E660F.
About this task
VxRail supports Pensando and NVIDIA BF-2 DPUs.
This procedure applies to VxRail clusters running VMware vSphere 8.0.x and VxRail 8.0.010.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Enable the DPU offload after Day 1 VxRail deployment
Enable the DPU offload after the Day 1 bring-up.
Steps
1. On the Physical adapters page, verify the DPU Backed column is marked on DPU adapters.
2. Select Networking > Datacenter.
3. From the Actions menu, select Distributed Switch > New Distributed Switch.
a. From the left menu, click Name and location, enter the details, and click Next.
b. From the left menu, click Select version and select VMware VDS version 8.0.0.
c. From the left menu, click Configure settings and select the associated DPU vendor (Pensando or NVIDIA BF-2).
d. Create the distributed virtual port group (DVPG) and manage the teaming policy if needed.
4. Right-click the DPU-VDS and select Add and Manage Hosts.
a. On the Select task window, select Add hosts and click NEXT.
b. On the Select hosts window, select all the compatible hosts and click NEXT.
c. On the Manage physical adapters window, select the physical adapters from the drop-down menu and assign to the
uplinks. Click NEXT.
You can use only the compatible DPU adapters.
d. OPTIONAL: Assign the VMkernel adapters to the specified DVPG.
e. If you are not using the DVPG, from the Migrate VM Networking window, click NEXT.
f. On the Ready to complete window, click FINISH.
The VMware VDS is deployed and configured so that the VxRail is prepared to support the DPU offload.
NOTE: The VxRail Appliance nodes should be integrated with NSX to leverage any network offload functionality.
Add a VxRail node
Add a node only for a VxRail cluster that is equipped with Pensando or NVIDIA DPUs.
Prerequisites
● Obtain access to the management system from the user to communicate with the VxRail.
● Ensure that the VxRail node that you add is compatible with VxRail version 8.0.010.
● Ensure that you have compatible DPUs to add a node.
● Ensure that the node that you add is identical to the existing nodes.
Steps
1. Log in to the VMware vSphere Web Client as administrator.
2. Select Hosts and Clusters > VxRail-DataCenter > VxRail-Virtual-SAN.
3. From the Configure menu, select VxRail > Health Monitoring and verify that the Health Monitoring Status is set to
Enable.
4. Select VxRail > Hosts.
5. Click ADD.
● If the new node version matches the cluster version, select the host. To discover the VxRail hosts by Loudmouth mode,
configure the ToR switches and power on the hosts.
● If the new node version is lower than the cluster version and the node is compatible, add the new node to the cluster.
The new node is upgraded to the cluster level during the node addition.
● If the new node is not compatible, upgrade the corresponding sub component, or downgrade before you add the node to
the VxRail cluster.
● If no new hosts are found and you want to add a node using the IP address and credentials, click ADD.
6. To add the node manually, in the Add Hosts screen, enter the ESXi IP Address and the ESXi Root Password.
7. Click VALIDATE.
8. Click ADD.
9. If you are using host discovery to add a node, in the Add VxRail Hosts window, select the nodes that you want to add to
your VxRail cluster and click NEXT to configure new nodes.
NOTE: You can add a maximum of six nodes at a time.
10. In the vCenter User Credentials window, enter the VMware vCenter Server user credentials. Click NEXT.
11. In the NIC Configuration window, select a configuration, and select NICs and VMNICs. Click NEXT.
Select the proper NIC configuration and define the NIC-mapping configuration plan for the new hosts.
The default NIC configuration is from the node that you configured first in the VxRail cluster. The default values of the
VMNIC for the new nodes must align with the selected NIC configuration.
Default values must satisfy the common configuration requirement.
NOTE: If the VxRail cluster uses an external DNS server, all the nodes that are added to the VxRail cluster must have
the DNS hostname and IP address lookup records.
12. In the Host Settings window, enter the ESXi Host Configuration settings for the hosts and click NEXT.
13. OPTIONAL: In the Host Location window, to customize the host location, enter the Rack Name, Rack Position, and click
NEXT.
14. In the Network Settings window, enter the VMware vSAN IPv4 Address and VMware vSphere vMotion IPv4 Address.
Click NEXT.
NOTE: A dynamic node cluster with a Fibre Channel array does not display the VMware vSAN fields.
15. In the Validate window, review the details and click VALIDATE. Click BACK to make any changes.
VxRail validates the configuration details and if the validation passes, a success message appears on the screen.
16. In the Validate window, select Yes to put the hosts in maintenance mode and click FINISH.
NOTE: You must select Put Hosts in Maintenance Mode option to add the nodes to VCF on a VxRail environment.
17. Monitor the progress of each host that is added to the VxRail cluster.
18. Once the expansion process is complete, a success message appears. If a supported lower version of the node is added, the node is upgraded to the cluster level.
Remove VxRail nodes
Remove nodes to decommission the older generation VxRail nodes and seamlessly migrate them to the new generation VxRail.
This procedure applies to the VxRail cluster running the VxRail version 8.0.010.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
CAUTION: You cannot use this task to replace a node. Node removal does not destroy the VxRail cluster.
Prerequisites
● Disable the remote support connectivity, if enabled.
● Verify that the VxRail cluster is in a healthy state.
● Add new nodes into the cluster before running the node removal procedure to avoid any capacity or node limitations.
● Verify that the VxRail cluster has enough nodes remaining after the node removal to support the current Failure to Tolerate (FTT) setting:
  ● RAID 1, FTT = 1: minimum 4 nodes
  ● RAID 1, FTT = 2: minimum 6 nodes
  ● RAID 5, FTT = 1 (all-flash VxRail only): minimum 5 nodes
  ● RAID 6, FTT = 2 (all-flash VxRail only): minimum 7 nodes
Verify the VxRail cluster health
Verify the VxRail cluster health status.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select a cluster and click the Monitor tab.
3. Select vSAN > Skyline Health.
4. If alarms are displayed, acknowledge them and click Reset to Green at the node and cluster levels before you remove the node.
Verify the capacity, CPU, and memory requirements
Before removing the node, verify that the capacity, CPU, and memory are sufficient to allow the VxRail cluster to continue
running without any issue.
About this task
If the VMware vSAN used capacity percentage is over 80 percent, do not remove the node, as doing so may lead to VMware vSAN performance issues.
Use the following formula to determine whether cluster requirements can be met after the node removal:
vSAN_used_capacity_% = used_total / (current_capacity - capacity_to_be_removed)
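For example, if the used total is 40 TB, the current capacity is 100 TB, and the node to be removed contributes 20 TB, the projected value is 40 / (100 - 20) = 50 percent, which stays under the 80 percent threshold.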
Steps
1. To view capacity for the cluster, log in to the VMware vSphere Web Client as administrator, and perform the following:
a. Under the Inventory icon, select the VMware vSAN cluster and click the Monitor tab.
b. Select vSAN > Capacity.
2. To check the impact of data migration on a node, perform the following:
a. Select vSAN > Data Migration Pre-check.
b. From the SELECT OBJECT drop-down, select the host.
c. From the vSAN data migration drop-down, select Full data migration and click PRE-CHECK.
3. To view disk capacity, perform the following:
a. Select the VMware vSAN cluster and click the Configure tab.
b. Select vSAN > Disk Management to view capacity.
Use the following formulas to compute percentage used:
CPU_used_% = Consumed_Cluster_CPU /(CPU_capacity - Plan_to_Remove_CPU_sum)
Memory_used_% = Consumed_Cluster_Memory /(Memory_capacity - Plan_to_Remove_Memory_sum)
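For example, with illustrative values: if the cluster consumes 200 GHz of CPU against 400 GHz of capacity, and the node to be removed provides 80 GHz, then CPU_used_% = 200 / (400 - 80) = 62.5 percent. The same calculation applies to memory.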
4. To view the CPU and memory overview, perform the following:
a. Select the VMware vSAN cluster and click Monitor tab.
b. Select Resource Allocation > Utilization.
5. To check the CPU and memory resources on a node, perform the following:
a. Select the node and click the Summary tab.
b. View the Hardware window for CPU, memory, Virtual Flash Resource, Networking, and Storage.
Remove the node
Place the node into maintenance mode before you remove the node.
Prerequisites
Before you remove the node, perform the following steps to place the node into maintenance mode:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon, right-click host that you want to remove and select Maintenance Mode > Enter Maintenance
Mode.
3. In the Enter Maintenance Mode dialog, check Move powered-off and suspended virtual machines to other hosts in
the cluster.
4. Next to vSAN data migration, from the drop-down menu, select Full data migration and click GO-TO PRECHECK.
5. Verify that the test was successful and click ENTER MAINTENANCE MODE and click OK.
6. To monitor the VMware vSAN resyncing, click the cluster name and select Monitor > vSAN > Resyncing Objects.
Steps
1. To remove the host from the VxRail cluster, perform the following:
a. Select the cluster and click the Configure tab.
b. Select VxRail > Hosts.
c. Select the host and click REMOVE.
2. In the Remove Host from Cluster window, enter the VMware vCenter Server administrator and root account information.
3. After the account information is entered, click VERIFY CREDENTIALS.
4. When the validation is complete, click APPLY to create the Run Node Removal task.
5. After the precheck successfully completes, the host shuts down and is removed.
6. For L3 deployment: If you have removed all the nodes of a segment, select the unused port group on VMware VDS and click
Delete.
Next steps
To access SSH, perform the following:
● Log in to the VMware vCenter Server Management console as root.
● From the left menu, click Access.
● From the Access Settings page, click EDIT and enable SSH.
If a DNS resolution issue occurs after you remove a node, or after you add the removed node back into the cluster with a new IP address, update dnsmasq on the VMware vCenter Server by entering:
# service dnsmasq restart
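To confirm that the refreshed record resolves as expected, a quick check (the hostname placeholder is illustrative):

dig <node_fqdn> +short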
Change the VxRail node IP address or hostname
Change the VxRail node IP address or hostname.
Steps
To change the IP address or hostname of the VxRail node, see the Change the hostname and IP address for the VxRail Manager VM chapter in the VxRail 8.0 How-To-Procedures guide.
Enable Enhanced Linked Mode for VMware vCenter
Server
You can move a VMware vCenter Server from one VMware vSphere domain to another VMware vSphere domain. Tagging and
licensing are retained and migrated to the new domain.
Prerequisites
● To avoid data loss, take a file-based backup of each node before the repointing process.
● Be familiar with UNIX or Linux commands and the VMware vSphere management interface.
About this task
Repointing is supported only with the VMware vCenter Server 6.7 U1 and later. The following use cases are supported:
● Migrate a VMware vCenter Server from an existing domain to another existing domain with or without replication. The migrated VMware vCenter Server leaves the existing single sign-on domain and joins the other existing domain using Enhanced Linked Mode (ELM).
● Migrate a VMware vCenter Server from an existing domain to a newly created domain (where the migrated VMware vCenter
Server is the first instance). See Repoint a VMware vCenter Server Node to a New Domain for more information. In this
case, there is no replication partner.
This procedure is not applicable to dynamic node clusters. This procedure applies to VxRail clusters running version 8.0.x and later that are managed by the VxRail VMware vCenter Server with an external DNS. See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
The following list describes the arguments for the cmsso-util domain repoint command:
● -m, --mode: The mode (precheck or execute) in which the command runs.
● -spa, --src-psc-admin: SSO administrator username for the source VMware vCenter Server. Do not append @domain.
● -dpf, --dest-psc-fqdn: The FQDN of the VMware vCenter Server to repoint.
● -dpa, --dest-psc-admin: SSO administrator username for the destination VMware vCenter Server. Do not append @domain.
● -ddn, --dest-domain-name: SSO domain name of the destination VMware vCenter Server.
● -dpr, --dest-psc-rhttps: (Optional) HTTPS port for the destination VMware vCenter Server. If not set, the default port is 443.
● -dvf, --dest-vc-fqdn: The FQDN of the VMware vCenter Server pointing to a destination VMware vCenter Server. The VMware vCenter Server is used to check for component data conflicts in the precheck mode. If not provided, conflict checks are skipped and the default resolution (COPY) is applied for any conflicts that are found during the import process. This argument is optional only if the destination domain does not have a VMware vCenter Server.
● -sea, --src-emb-admin: Administrator for the VMware vCenter Server with an embedded Platform Services Controller. Do not append @domain to the administrator ID.
● -rpf, --replication-partner-fqdn: (Optional) The FQDN of the replication partner node to which the VMware vCenter Server is replicated.
● -rpr, --replication-partner-rhttps: (Optional) The HTTPS port for the replication node. If not set, the default port is 443.
● -rpa, --replication-partner-admin: (Optional) SSO administrator username of the replication partner VMware vCenter Server.
● -dvr, --dest-vc-rhttps: (Optional) The HTTPS port for the VMware vCenter Server pointing to the destination VMware vCenter Server. If not set, the default port is 443.
● --ignore-snapshot: (Optional) Ignore the snapshot warning.
● --no-check-certs: (Optional) Ignore the certification validation.
● --verbose: (Optional) Retrieves the command execution detail.
● -h, --help: (Optional) Displays the help message for the cmsso-util domain repoint command.
Repoint a single VMware vCenter Server node to an existing
domain without a replication partner
You can repoint a single VMware vCenter Server from one VMware SSO domain to an existing VMware SSO domain without a
replication partner. Each VMware SSO domain contains a single VMware vCenter Server.
Prerequisites
Power on both VMware vCenter Server nodes (A and B) before beginning the repointing process.
Steps
1. Using SSH, log in to the VMware vCenter Server as root.
2. To access the VMware vCenter Server A of domain 1, enter:
ssh root@<vcenter_a_ip_address>
3. To perform the precheck from domain 1 to domain 2, enter:
cmsso-util domain-repoint -m pre-check --src-emb-admin administrator --replication-partner-fqdn <vcenter_a_ipaddress_domain2> --replication-partner-admin PSC_Admin_of_destination_node --dest-domain-name destination_PSC_domain

Enter Source embedded vCenter Server Admin Password:
Enter Replication partner Platform Services Controller Admin Password:
The domain-repoint operation will export License, Tags, Authorization data
before repoint and import after repoint.
WARNING: Global Permissions for the source vCenter Server system will be lost. The
administrator for the target domain must add global permissions manually.
Source domain users and groups will be lost after the Repoint operation.
User 'administrator@vsphere.local' will be assigned administrator role on the
source vCenter Server
The following license keys are being copied to the target Single Sign-On
domain. VMware recommends using each license key in only a single domain. See
"vCenter Server Domain Repoint License Considerations" in the vCenter Server
Installation and Setup documentation
MH2HL-2PH9N-08C70-19573
Repoint Node Information:
Source embedded vCenter Server:c3-vc.rackk01.local
Replication partner Platform Services Controller: c2-vc.rackk01.local
Thumbprint: 5C:04:EE:F2:E4:83:F0:D7:0D:AD:3A:F3:34:A5:D1:46:BE:E0:45:77
All Repoint configuration settings are correct: proceed? [Y|y|N|n]: y
Starting License pre-check
Starting Authz Data export
Starting Tagging Data export
Conflict data, if any, can be found under /storage/domain-data/Conflict*.json
Pre-checks successful
The precheck writes the conflicts to the /storage/domain-data directory.
4. OPTIONAL: Review conflicts and apply the same resolution for all the conflicts, or apply a separate resolution for each
conflict.
The conflict resolutions are:
● Copy: Creates a copy of the data in the target domain.
● Skip: Skips copying the data in the target domain.
● Merge: Merges the conflict without creating duplicates.
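To review the conflicts before choosing a resolution, a minimal sketch is shown below; the specific file name is hypothetical, so use the names that the pre-check actually generated:

# List the conflict files that the pre-check wrote
ls /storage/domain-data/Conflict*.json
# Review a conflict file (name is illustrative) before choosing Copy, Skip, or Merge
cat /storage/domain-data/Conflict_Tags.json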
Back up each VxRail node (optional)
To ensure no loss of data, take a file-based backup of each node before repointing.
Steps
1. Log in to the VMware vCenter Server as root.
2. Click Backup.
The table under Activity displays the latest backup version from the VMware vCenter Server.
3. Click Backup Now.
4. OPTIONAL: Click Use backup location and username from backup schedule and perform the following:
a. Enter the backup location details.
b. OPTIONAL: Enter an encryption password if you want to encrypt your backup file.
To encrypt the backup data, use the encryption password.
c. OPTIONAL: Select Stats, Events, and Tasks to back up additional historical data from the database.
d. OPTIONAL: In the Description field, enter a description for the backup.
e. Click Start.
Repoint the VMware vCenter Server A of domain 1 to domain 2
Repoint the VMware vCenter Server A of domain 1 to domain 2.
Steps
To repoint the VMware vCenter Server A of domain 1 to domain 2, enter:
cmsso-util domain-repoint -m execute --src-emb-admin Administrator --replication-partner-fqdn <vcenterb_fqdn_domain2> --replication-partner-admin PSC_Admin_of_destination_node --dest-domain-name destination_PSC_domain

Enter Source embedded vCenter Server Admin Password:
Enter Replication partner Platform Services Controller Admin Password:
The domain-repoint operation will export License, Tags, Authorization data
before repoint and import after repoint.
WARNING: Global Permissions for the source vCenter Server system will be lost. The
administrator for the target domain must add global permissions manually.
Source domain users and groups will be lost after the Repoint operation.
User 'administrator@vsphere.local' will be assigned administrator role on the
source vCenter Server system.
The default resolution mode for Tags and Authorization conflicts is Copy,
unless overridden in the conflict files generated during pre-check.
Solutions and plugins registered with vCenter Server must be re-registered.
Before running the Repoint operation, you should back up all nodes. You can use
file-based backups to restore in case of failure. By using the Repoint tool
you agree to take the responsibility for creating backups. Otherwise
you should cancel this operation.
The following license keys are being copied to the target Single Sign-On
domain. VMware recommends using each license key in only a single domain. See
"vCenter Server Domain Repoint License Considerations" in the vCenter Server
Installation and Setup documentation
MH2HL-2PH9N-08C70-0R80K-19573
Repoint Node Information:
Source embedded vCenter Server:c3-vc.rackk01.local
Replication partner Platform Services Controller: c3-vc.rackk01.local
Thumbprint: B7:C0:FF:9D:C8:A1:64:AB:1B:24:8C:1C:AB:4D:86:62:1D:E6:A5:64
All Repoint configuration settings are correct: proceed? [Y|y|N|n]: y
Starting License export ... Done
Export Service Data ... Done
Uninstalling Platform Controller Services ... Done
Stopping all services ... Done
Updating registry settings ... Done
Re-installing Platform Controller Services ... Done
Registering Infra Services ... Done
Starting License import ... Done
Starting Authz Data import ... Done
Starting Tagging Data import ... Done
Starting CLS import ... Done
Starting WCP service import phase ... Done
Starting NSXD import ... Done
Applying target domain CEIP participation preference ... Done
Starting all services ... Done
Repoint successful.
Update the VMware vCenter Server SSL certificates from VMware
vCenter Server B
To update SSL certificates, generate the following procedure in SolVe: Import VMware vSphere SSL certificates to VxRail
Manager.
Refresh the node certificates in the VMware vCenter Server A
Refresh the node CA certificates in the VMware vCenter Server A.
Steps
1. Log in to the VMware vCenter Server as root.
2. Select Host > Configure > System > Certificate.
3. Click REFRESH CA CERTIFICATES and wait for the task to complete.
4. Repeat these steps on all the nodes in the VMware vCenter Server A.
Repoint the VMware vCenter Server node to a new domain
Repoint the VMware vCenter Server from an existing domain to a newly created domain.
Steps
1. Shut down the node (VMware vCenter Server A) that is to be repointed from domain 1 (moved to a different domain).
2. Decommission the VMware vCenter Server node that is repointed.
For example, to decommission the VMware vCenter Server A, log in to the VMware vCenter Server B (on the original
domain) and enter:
ssh root@<vcenter_ip_address>
cmsso-util unregister --node-pnid <vcentera_fqdn> --username VC_B_sso_administrator@sso_domain.com --passwd VC_B_sso_adminuser_password
Solution users, computer account and service endpoints will be unregistered
2021-01-29T03:15:10.144Z Running command: ['/usr/lib/vmware-vmafd/bin/dir-cli',
'service', 'list', '--login', 'administrator@vsphere.local']
2021-01-29T03:15:10.167Z Done running command
Stopping all the services ...
All services stopped.
Starting all the services ...
Started all the services.
Success
3. Power on the VMware vCenter Server A.
4. Optionally, to prevent data loss, take a file-based backup of each node before repointing the VMware vCenter Server.
a. Log in to the VMware vCenter Server management interface as root.
b. Click Backup.
The table under Activity displays the latest backup version that is taken of the VMware vCenter Server.
c. Click Backup Now to open the wizard.
d. OPTIONAL: Click Use backup location and username from backup schedule to use the information from a scheduled
backup.
● Enter the backup location details.
● OPTIONAL: Enter an encryption password if you want to encrypt your backup file.
To encrypt the backup data, you must use the encryption password.
● OPTIONAL: Select Stats, Events, and Tasks to back up additional historical data from the database.
● OPTIONAL: In the Description field, enter a description for the backup.
● Click Start.
5. To repoint the VMware vCenter Server A to new domain 2, enter:
cmsso-util domain-repoint -m execute --src-emb-admin administrator --dest-domain-name
destination_PSC_domain
Enter Source embedded vCenter Server Admin Password:
The domain-repoint operation will export License, Tags, Authorization data before
repoint and import after repoint.
WARNING: Global Permissions for the source vCenter Server system will be lost.
6. Update the VMware vCenter Server A SSL certificates from its VMware vCenter Server.
Generate Import VMware vSphere SSL certificates to VxRail Manager to update certificates.
7. Generate Refresh node certificates in VMware vCenter Server A to refresh node certificates.
For VMware documentation, see https://docs.vmware.com/.
Enable large cache tier capacity before VxRail cluster
initialization
For each VMware ESXi node, enable large cache tier capacity to improve the performance of the VxRail VMware vSAN before
the initial VxRail deployment.
About this task
This large cache tier capacity (write buffer) enhancement only applies to the Original Storage Architecture (OSA) in all-flash
configurations. Hybrid VMware vSAN clusters using the OSA are limited to a 600 GB write buffer.
This procedure applies to the VxRail clusters running the VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. Using DCUI, log in to each unconfigured VMware ESXi node as root.
2. To view the current large write buffer setting, enter:
esxcfg-advcfg -g /LSOM/enableLargeWb
Value of enableLargeWb is 0
3. To enable large write buffer, enter:
esxcfg-advcfg -s 1 /LSOM/enableLargeWb
Value of enableLargeWb is 1
4. Repeat these steps on each node and complete the VxRail initialization.
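If SSH is enabled on the nodes, the check-and-enable steps can be scripted; a minimal sketch with illustrative hostnames (not part of the original procedure):

# Illustrative only: hostnames are examples; SSH must be enabled on each node.
for host in c1-esx01.rackE11.local c1-esx02.rackE11.local; do
  ssh root@"$host" \
    'esxcfg-advcfg -g /LSOM/enableLargeWb && esxcfg-advcfg -s 1 /LSOM/enableLargeWb'
done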
Enable large cache tier capacity for an existing VxRail
cluster
For each VMware ESXi node, enable large cache tier capacity to improve the performance of an existing VxRail VMware vSAN
cluster.
About this task
This large cache tier capacity (write buffer) enhancement only applies to the Original Storage Architecture (OSA) in all-flash
configurations. Hybrid vSAN clusters using the OSA are limited to a 600 GB write buffer.
This procedure applies to the VxRail cluster running VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
Steps
1. Using DCUI, log in to each VMware ESXi node as root.
2. To view the current large write buffer setting, enter:
esxcfg-advcfg -g /LSOM/enableLargeWb
Value of enableLargeWb is 0
3. To set the large write buffer, enter:
esxcfg-advcfg -s 1 /LSOM/enableLargeWb
Value of enableLargeWb is 1
4. Repeat these steps to set the large write buffer on each node.
5. To activate the changes for the large write buffer, perform the following steps to recreate the disk groups:
a. Place the node into maintenance mode.
b. Go to Recreate a Disk Group to recreate the disk groups. Select the full data migration mode.
6. After you recreate the disk groups, verify that the large write buffer size is set for all the disk groups by entering:
esxcli vsan storage list | grep "VSAN Disk Group UUID" | awk -F ': ' '{print $2}' | uniq |
xargs -I {} vsish -e get /vmkModules/lsom/disks/{}/info | grep "Write Buffer Size" |
awk -F':' '{print $1":"$2/1024/1024/1024"GB"}'
Before enabling the large write buffer, the maximum disk group write buffer size is 600 GB. After enabling the large write
buffer, the disk group write buffer size can be up to 1600 GB.
7. Repeat Steps 5 and 6 for each node.
Remediate the CPU core count after node addition or
replacement
Get the cluster CPU drifts to determine if you need to perform this procedure.
Prerequisites
● Verify that you have a PowerEdge 15G or higher model with Intel CPU configuration.
● Enable the cluster DRS.
● Obtain the API guide.
About this task
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
This is applicable for VxRail 7.0.400 or VxRail 8.0 and later. See VxRail 7.x Support Matrix or VxRail 8.x Support Matrix for a list
of supported versions.
During the cluster bring-up, administrator@vsphere.local and the password are configured.
Steps
1. To get the cluster CPU drifts, use the GET method to invoke the REST API:
curl -k -XGET -u <username>:<password> https://localhost/rest/vxm/private/v1/cluster/i2e_config
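For orientation, a hypothetical response shape is shown below; only desiredConfiguration.cpu.enabledCores is confirmed by the request body later in this procedure, and the remaining structure is an assumption:

{
  "driftConfiguration": {
    "cpu": { "enabledCores": 16 }
  },
  "desiredConfiguration": {
    "cpu": { "enabledCores": 24 }
  }
}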
2. If the driftConfiguration in the API response is empty, do not perform this procedure.
If driftConfiguration is not empty, view the CPU core count under desiredConfiguration in the API response.
3. If the driftConfiguration is not empty, continue to Update the cluster status.
Update the cluster status
If cluster CPU drifts are populated, update the cluster status.
Prerequisites
● Verify that you have a PowerEdge 15G or higher model with Intel CPU configuration.
● Enable the cluster DRS.
Steps
1. On the VMware vSphere Web Client, log in as administrator.
2. Select the VxRail cluster, and then click the Configure tab. Under Services, click vSphere DRS.
3. On the right, view the VMware vSphere DRS configuration. If cluster DRS is off or the Automation Level is not fully automated, click EDIT.
a. In Edit Cluster Settings, enable vSphere DRS. For the Automation Level, use the drop-down menu to select Fully
Automated.
b. Click OK to save the settings.
4. Check the mode status for each node. If a node is in Maintenance Mode, right-click the node and select Maintenance Mode > Exit Maintenance Mode.
Trigger a rolling update
Prerequisites
● Verify that you have a PowerEdge 15G or higher model with Intel CPU configuration.
● Enable the cluster DRS.
● See the VxRail API guide.
Steps
1. Prepare the rolling update API request body with the desired CPU core count set to the enabled core count value:
{
"desiredConfiguration": {
"cpu": {
"enabledCores": <enable core count>
}
}
}
2. To trigger the rolling update process, invoke the REST API:

curl -XPATCH -u <username>:<password> https://<vxm-ip>/rest/vxm/private/v1/cluster/i2e_config --data-raw '{
"desiredConfiguration": {
"cpu": {
"enabledCores": <enable core count>
}
}
}'
You will get a request ID.
3. To check the task status, invoke the REST API GET method:

curl -k -XGET -u <username>:<password> https://127.0.0.1/rest/vxm/v1/requests/<request_id>
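To poll until the request finishes, a minimal sketch is shown below; the COMPLETED and FAILED state strings are assumptions about the response payload, so check the VxRail API guide for the exact schema:

# Illustrative polling loop; adjust the match strings to the actual schema.
while true; do
  resp=$(curl -sk -XGET -u <username>:<password> \
    https://127.0.0.1/rest/vxm/v1/requests/<request_id>)
  echo "$resp"
  echo "$resp" | grep -qE 'COMPLETED|FAILED' && break
  sleep 30
done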
Submit install base updates for VxRail
This section provides information for Dell partners and employees about how to submit install base updates for VxRail.
About this task
Detailed information about product registration, move or party changes, and other install base maintenance updates is available
for Dell partners. For more information, see the Product Registration and Install Base Maintenance Job Aid. You can also view the video tutorial for the partner product registration process: Dell Partner Product Registration Process and Deployment Operations Guide.
For Dell Technologies employees, see KB 197636 for information that is related to installation of install base updates for VxRail.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster.
This is applicable for VxRail versions 7.0.x or 8.0 and later. See VxRail 7.x Support Matrix or VxRail 8.x Support Matrix for a list
of supported versions.
View CloudIQ information in VxRail
The CloudIQ web portal provides cloud-based multicluster management and analytics of your VxRail.
Prerequisites
Bring up the VxRail Cluster and verify that there are no critical alarms and that VMware vSAN is healthy.
About this task
This procedure applies to VxRail versions 7.0.410 and 8.0.020 and later.
See the VxRail 7.x Support Matrix or the VxRail 8.x Support Matrix for list of the supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. Open the VMware vSphere Web Client and select the VxRail cluster.
2. Select Configure > VxRail > Support.
3. Under VxRail HCI System Software SAAS multi-cluster management, a description of CloudIQ is displayed with a link
to a demo. Additionally, you can click the question icon for more information.
6
Manage witness settings
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Change the hostname and IP address of the witness
sled
This procedure targets the VMware ESXi host on the VxRail-supplied witness sled. The witness sled is hardware. This procedure
does not change network settings on the witness VM. A shutdown of the witness VM is required to make the update.
About this task
This procedure applies to VxRail version 7.0.420 and later clusters.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Change the IP address of the VxRail-managed witness sled
Modify the IP address of the VMware ESXi host on the VxRail-managed witness sled.
Prerequisites
Before you change the IP address of the witness sled, perform the following:
● Do not update the witness sled DNS entry with the new IP address until instructed to in the steps.
● Verify the health status of the sled to avoid running in a degraded state.
● Verify that the DNS mapping is correct.
● Verify that the health monitoring status is disabled.
About this task
DNS must be configured properly or this task may not work.
Steps
1. To shut down the witness VM, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select VxRail data center and click Witness Folder Cluster. Select the witness sled
and click VMware vSAN Witness Appliance.
c. Click the shutdown icon at the top-right corner of the screen and click YES to confirm.
2. To remove the witness sled from the VMware vCenter Server, perform the following:
a. Right-click the sled and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the sled again and select Remove from Inventory.
3. To change the IP address for the witness sled, perform the following:
a. Log in to the VMware ESXi host client of the witness sled through the management IP address.
b. From the Networking left-menu, under the VMkernel NICs tab, click vmk2.
c. Click Edit Settings.
d. When the wizard opens, configure the new IP address and click Save.
NOTE: The new management IP address disconnects immediately when you click Save. To reconnect, use the updated IP address, or change it from the ESXi shell command line through the iDRAC remote console.
4. Determine how the DNS is managed before you update your DNS server with new DNS mapping and perform one of the
following:
● If the DNS server is customer managed, add a DNS server entry where the new witness sled IP address is mapped to the
original witness sled FQDN. Delete the old entry of the witness sled. Continue to step 5.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file.
127.0.0.1      localhost localhost.localdom
20.12.91.200   c1-vxm.rackE11.local c1-vxm
20.12.91.201   c1-vc.rackE11.local c1-vc
20.12.91.101   c1-esx01.rackE11.local c1-esx01
20.12.91.102   c1-esx02.rackE11.local c1-esx02
20.12.91.202   c1-psc.rackE11.local c1-psc   --- Original witness sled IP address in the DNS entry
b. To add a DNS entry where the new witness sled IP address is mapped to the original witness sled FQDN, enter:

<new_sled_ipaddr> <original_sled_fqdn> <original_sled_host>

127.0.0.1      localhost localhost.localdom
20.12.91.200   c1-vxm.rackE11.local c1-vxm
20.12.91.201   c1-vc.rackE11.local c1-vc
20.12.91.101   c1-esx01.rackE11.local c1-esx01
20.12.91.102   c1-esx02.rackE11.local c1-esx02
20.12.91.202   c1-psc-witness-new.rackE11.local c1-psc-witness-new
20.12.91.203   c1-psc-witness-new.rackE11.local c1-psc-witness-new   --- Map the old FQDN to the new witness sled IP address
c. Delete the old DNS entry for the witness sled in the DNS server. Save the changes and quit.

127.0.0.1      localhost localhost.localdom
20.12.91.200   c1-vxm.rackE11.local c1-vxm
20.12.91.201   c1-vc.rackE11.local c1-vc
20.12.91.101   c1-esx01.rackE11.local c1-esx01
20.12.91.102   c1-esx02.rackE11.local c1-esx02
20.12.91.202   c1-psc-witness-new.rackE11.local c1-psc-witness-new   --- Delete this old DNS entry
20.12.91.203   c1-psc-witness-new.rackE11.local c1-psc-witness-new
d. To restart the DNS service, enter:
systemctl restart dnsmasq
e. To verify the FQDN mapping to the new witness sled IP address, enter:
dig <witness_sled_fqdn> +short
NOTE: You can also use the nslookup command.
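If you prefer a non-interactive edit over vi on the VxRail Manager, a minimal sketch using sed with the example addresses shown above (illustrative only; verify the file before restarting dnsmasq):

# Replace the old witness sled IP address (example values) in /etc/hosts
sed -i 's/^20\.12\.91\.202\b/20.12.91.203/' /etc/hosts
systemctl restart dnsmasq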
5. To clear the DNS cache on the VMware vCenter Server, perform the following:
a. Using SSH, log in to the VxRail vCenter Server as root.
b. To restart the DNS service, enter:
systemctl restart dnsmasq
c. To verify the FQDN mapping to the new witness sled IP address, enter:
dig <witness_sled_fqdn> +short
NOTE: You can also use the nslookup command.
6. To add the witness sled to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host....
c. To add the witness sled, use the witness sled FQDN.
d. Follow the steps in the wizard to add the witness sled.
e. From the Witness Folder cluster, right-click the cluster and select Maintenance Mode > Exit Maintenance Mode.
f. From the Witness Folder cluster, select the witness sled and verify that the IP address is changed to the new IP
address.
7. To power on the witness VM, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select the VxRail data center and click Witness Folder Cluster. Select the witness
sled and click VMware vSAN Witness Appliance.
c. Under Summary tab, click the Power on icon on the top-right corner of the screen and wait for the witness VM to
power on.
8. To change the witness sled platform service binding IP address, perform the following:
a. Log in to the VMware ESXi host client of the witness sled using the management IP address.
b. From the Manage left-menu, select the Services tab.
c. Click the Start icon to turn on the SSH service on the witness sled.
d. Using SSH, log in to the witness sled.
e. To edit the platform configuration file and change the IP address to the new witness sled IP address, enter:
vi /etc/config/vxrail/platform.conf
[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-services.sock
[backend]
max_workers = 12
[restservice]
bind = 20.12.91.202---Original witness sled IP address in the platform.conf file
[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-services.sock
[backend]
max_workers = 12
[restservice]
bind = 20.12.91.203---New witness sled IP address
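As an alternative to editing the file interactively, a minimal sketch using sed with the example addresses from the listing above (illustrative only):

# Update the bind address (example values) in platform.conf
sed -i 's/^bind = 20\.12\.91\.202$/bind = 20.12.91.203/' /etc/config/vxrail/platform.conf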
f. To restart the platform service, enter:
/etc/init.d/vxrail-pservice restart
Platform Service successfully stopped.
Check hostd status.
hostd is ready.
Platform Service started.
9. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health. Verify that the VxRail cluster is in a healthy state.
Change the hostname of the witness sled
Modify your DNS server with new mapping for the witness sled.
Prerequisites
Before you change the hostname of the witness sled, perform the following:
● Verify that the DNS mapping is correct.
● Verify that the VxRail cluster is in a healthy state.
● Verify that the health monitoring status is disabled.
About this task
You can add a DNS entry for the witness sled on either a customer-managed DNS server or a VxRail-managed DNS server. The procedure depends on how the DNS server is managed. Verify that DNS is configured properly or this task may not work.
Steps
1. Determine how the DNS is managed before you add an entry for the witness sled and perform one of the following:
● If the DNS server is customer managed, add a DNS server entry where the new FQDN is mapped to the original witness
sled IP address. Continue to step 2.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file.
127.0.0.1      localhost localhost.localdom
20.12.91.200   c1-vxm.rackE11.local c1-vxm
20.12.91.201   c1-vc.rackE11.local c1-vc
20.12.91.101   c1-esx01.rackE11.local c1-esx01
20.12.91.102   c1-esx02.rackE11.local c1-esx02
20.12.91.202   c1-psc.rackE11.local c1-psc   --- Original DNS entry of the witness sled
b. To add a DNS entry for the witness sled where the new FQDN is mapped to the original witness sled IP address, enter:

<sled_ipaddr> <new_sled_fqdn> <new_sled_host>

For example: 172.16.10.105 witness-sled-new.vv009.local witness-sled-new

127.0.0.1      localhost localhost.localdom
20.12.91.200   c1-vxm.rackE11.local c1-vxm
20.12.91.201   c1-vc.rackE11.local c1-vc
20.12.91.101   c1-esx01.rackE11.local c1-esx01
20.12.91.102   c1-esx02.rackE11.local c1-esx02
20.12.91.202   c1-psc.rackE11.local c1-psc
20.12.91.202   c1-psc-witness-new.rackE11.local c1-psc-witness-new
c. To restart the DNS service, enter: systemctl restart dnsmasq
d. To verify the new DNS entry, enter:
dig <new_sled_fqdn> +short
NOTE: You can also use the nslookup command.
2. To shut down the witness VM, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select VxRail data center and select a witness sled. Click VMware vSAN Witness
Appliance.
c. Click the Shutdown icon at the top-right corner of the screen. Click YES to confirm.
3. To remove the witness VMware ESXi host from the VMware vCenter Server, perform the following:
a. Right-click the witness sled and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the witness sled again and select Remove from Inventory.
4. To change the hostname for the witness sled, perform the following:
a. Log in to the VMware ESXi host client of the witness sled through the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.
c. Click Edit Settings.
d. When the wizard opens, enter the new Host name and click Save.
Manage witness settings
111
5. To add the witness sled to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host....
c. Use the new witness sled FQDN to add the witness sled.
d. Follow the steps in the wizard to add the witness sled.
e. From the Witness Folder cluster, right-click the cluster and select Maintenance Mode > Exit Maintenance Mode.
f. From the Witness Folder cluster, select the witness sled and verify that the new FQDN is displayed.
6. To power on the witness VM, perform the following:
a. From the VxRail cluster left-menu, select VxRail data center and select a witness sled. Click VMware vSAN Witness
Appliance.
b. Under Summary tab, click the Power on icon on the top-right corner of the screen and wait for the witness VM to
power on.
7. Determine how the DNS is managed before you remove an entry and perform one of the following:
● If the DNS server is customer managed, delete the old DNS entry where the old FQDN is mapped to the witness sled IP
address. Go to step 8.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file and delete the old DNS entry that maps the old FQDN to the witness sled IP address. Save the changes and quit.

127.0.0.1      localhost localhost.localdom
20.12.91.200   c1-vxm.rackE11.local c1-vxm
20.12.91.201   c1-vc.rackE11.local c1-vc
20.12.91.101   c1-esx01.rackE11.local c1-esx01
20.12.91.102   c1-esx02.rackE11.local c1-esx02
20.12.91.202   c1-psc.rackE11.local c1-psc   --- Delete this original DNS entry
20.12.91.202   c1-psc-witness-new.rackE11.local c1-psc-witness-new
b. To restart the DNS service, enter: systemctl restart dnsmasq
8. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health to verify that the VxRail cluster is in a healthy state.
Change the IP address of the VxRail-managed
Witness VM
You can change the IP address of the VxRail-managed Witness VM. There is no procedure to change the IP address of a customer-managed Witness VM. The Witness VM is deployed at Day 1, imported into the Witness folder, and added as a host.
Prerequisites
Verify that the DNS name is localhost.localdomain. If the DNS name does not match, you cannot change the IP address
of the Witness VM.
To view the DNS name, in the VMware vSphere Web Client, select the Witness VM and click Summary.
About this task
This procedure applies to VxRail 7.0.420 and later for vSAN 2-node clusters or stretched clusters with a VxRail-managed Witness VM on VD-4000W.
Under the VxRail data center, the Witness folder cluster contains the Witness ESXi host, on which the Witness VM is deployed.
To change the IP address of the Witness VM, you must perform the following:
● Disable the stretched cluster.
● Remove the Witness VM.
NOTE: The VxRail-managed Witness VM is also known as the mapping host. The VMware ESXi operating system runs on this VM. When the VxRail-managed Witness VM is added to the witness folder, it is displayed as a VMware ESXi host. If the VxRail-managed Witness VM IP address is changed, the VMware ESXi host IP address also changes, so the VMware ESXi host must be removed and added back using the new IP address.
● Modify the IP address and restart the network.
● Add the Witness VM as a host with the new IP address.
● Update the VxRail Manager database.
● Configure the stretched cluster.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. Log in to the VMware vSphere Web Client as an administrator and select the Inventory icon.
2. To verify the health status, select a cluster and select the Monitor tab. Select vSAN > Skyline Health.
3. To disable the stretched cluster, perform the following:
a. Select the VxRail cluster and click the Configure tab.
b. Select vSAN > Fault Domains.
c. From Fault Domains window, click DISABLE STRETCHED CLUSTER and click REMOVE.
4. To remove the Witness VM mapping host, perform the following:
a. Right-click the VMware vSAN Witness Appliance and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the VMware vSAN Witness Appliance again and select Remove from Inventory.
5. To modify the IP address and restart the network, perform the following:
a. Select the VxRail data center and select the Witness Folder > VMware vSAN Witness Appliance.
b. On the Summary tab, click LAUNCH WEB CONSOLE.
c. On the console, press F2 and log in to the Witness VM with the credentials that were set on Day 1.
d. Under System Customization, select Configure Management Network.
e. Enter the IPv4 Configuration information and press Y to confirm the changes.
6. To add the Witness VM as a host with the new IP address, perform the following:
NOTE: There is no procedure to change the management IP address of the physical node where a customer-managed Witness VM is running.
a. Right-click the Witness Folder cluster and select Add Host....
b. In the Add Host wizard, select Name and location and enter the new IP address of the host to add to the VMware
vCenter Server. Click NEXT.
c. Accept the default entries for Connection settings, Host summary, Assign License, and Lockdown mode and click
NEXT.
d. For VM location wizard, select the folder location and click NEXT.
e. From the Witness Folder cluster, right-click the witness host and select Maintenance Mode > Exit Maintenance
Mode.
7. To update the VxRail Manager database, perform the following:
a. Use SSH to log in to the VxRail Manager as mystic and su to root.
b. Connect to the database and enter:
psql -U postgres vxrail
c. To query the witness sled IP address, enter:
select * from configuration.configuration where key = 'witness_vm_host';
 id | category |       key       |    value
----+----------+-----------------+--------------
 89 | setting  | witness_vm_host | 20.12.91.109   <-- old witness VM IP address
(1 row)
d. To update the witness VM IP address, enter:
update configuration.configuration set value = '<new_IP>'
where key = 'witness_vm_host';
select * from configuration.configuration where key = 'witness_vm_host';
 id | category |       key       |    value
----+----------+-----------------+--------------
 89 | setting  | witness_vm_host | 20.12.91.112   <-- new witness VM IP address
(1 row)
e. To exit the database, enter: \q.
8. To configure the stretched cluster, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the VxRail cluster and click the Configure tab.
c. Select vSAN > Fault Domains.
d. From the Fault Domains wizard, click CONFIGURE STRETCHED CLUSTER.
e. Follow the wizard steps and click NEXT twice. Click FINISH.
9. To verify the IP address change, perform the following:
a. Log in to the Witness VM with the new IP address from the web console.
b. To verify the Witness VM IP address configuration, enter:
esxcli network ip interface ipv4 get
Name  IPv4 Address    IPv4 Netmask   IPv4 Broadcast   Address Type  Gateway     DHCP DNS
----  --------------  -------------  ---------------  ------------  ----------  --------
vmk0  20.12.91.112    255.255.255.0  20.12.91.255     STATIC        20.12.91.1  false
vmk1  192.168.101.33  255.255.255.0  192.168.101.255  STATIC        20.12.91.1  false
c. Log in to the VMware vCenter Server MOB as an administrator.
d. Under Properties, click content.
e. Select the rootFolder: datacenter and navigate through the following values to reach ipAddress:
● childEntity: datacenter-3
● hostFolder: group-h5
● childEntity: group-h12 (Witness Folder cluster)
● childEntity: domain-s62 (Witness VM)
● Host: host-64
● VM: vm-67
● guest: guest
● ipAddress: <new_IP_address>
Verify that the new IP address is displayed.
10. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select the Monitor tab. Click vSAN > Skyline Health to verify that the VxRail cluster is in a healthy
state.
Collect the VxRail-supplied witness configuration
Collect the witness configuration details from the VxRail configuration file. The VxRail configuration file is contained within the
configuration report that is stored on the VMware vSphere Web Client.
About this task
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. To download the VxRail configuration .xml file from the current configuration report, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the Inventory icon.
c. Select the VxRail-Virtual-SAN-Cluster.
d. Click the Configure tab.
e. Select VxRail > System.
f. Click DOWNLOAD to download the VxRail configuration .xml file and save the file to your local repository.
2. To collect the witness configuration from the VxRail configuration .xml, perform the following:
a. Open the VxRail configuration .xml file.
b. Search for WitnessNode in the configuration report and collect the witness details from that element of the XML file.
Separate witness traffic on an existing stretched
cluster
VxRail stretched clusters can provide an alternate VMkernel interface that is designated to handle traffic aimed at the witness, instead of using the vSAN-tagged VMkernel interface. This feature allows for more flexible network configurations by allowing separate networks for node-to-node and node-to-witness traffic.
Prerequisites
● Select a dedicated VLAN or subnet to be provisioned at each data site for witness traffic. The VLAN at each data site should
be different. For example, VLAN 19 at Site-1 on subnet 172.18.19.0/24 and VLAN 20 at Site-2 on subnet 172.18.20.0/24.
● For both sites, create the VLAN on the ToR switches and add to the trunk ports going to the nodes.
● Create gateways for each witness traffic VLAN and verify that there is network connectivity between the witness subnet at
each data site and the witness site.
● Verify that static routes are set on existing stretched cluster deployments.
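For example, to list the existing static routes on a node before you begin (a sketch with illustrative output; the actual routes vary by deployment):
esxcli network ip route ipv4 list
Network      Netmask        Gateway      Interface  Source
-----------  -------------  -----------  ---------  ------
172.18.25.0  255.255.255.0  172.18.23.1  vmk3       MANUAL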
About this task
If an existing stretched cluster deployment is running a version earlier than VMware vSAN 6.7, upgrade it first. You can then configure a Witness traffic network that is different from the vSAN traffic network on existing stretched cluster deployments.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. To create a port group on each data site, perform the following:
a. Log in to the VMware vCenter Web Client.
b. From the main menu, click the Networking icon.
c. Right-click the VMware VDS and select Distributed Port Group > New Distributed Port Group.
d. In the New Distributed Port Group wizard, enter the name for the port group. Click NEXT.
e. In Configure settings, enter or select the following:
● From the VLAN type drop-down menu, select VLAN.
● Enter the VLAN ID.
● Select Customize default policies configuration and click NEXT.
f. In the Teaming and Failover window, modify the Failover order of the uplinks to match the existing failover order of
the management traffic. Click NEXT.
g. For the remaining steps, accept the default settings by clicking NEXT.
h. In Ready to Complete, review the selections and click FINISH.
i. Repeat these steps for the second data site.
2. To create VMkernel interfaces on data nodes for the WTS network at each data site, perform the following:
a. Select Networking.
b. Right-click the port group that you created earlier. For example, Site1_WTS_PG or Site2_WTS_PG.
c. Select Add VMkernel Adapters.
d. Click + Attached Hosts and select the specific hosts to be used for the data site. Click OK and then click NEXT.
e. Leave the default settings on Configure VMkernel adapter and click NEXT.
f. Enter the IP address and the subnet mask of the WTS network and click NEXT.
g. In Ready to Complete, review your selections and click FINISH.
h. Repeat these steps on the second data site.
3. To enable witness service on each node, perform the following:
a. Select the node.
b. Go to the Configure > Networking > VMkernel adapters view to determine the VMkernel interface for witness traffic.
c. Enable SSH for the node.
d. Use the SSH client to log in as root to the node.
e. To set the traffic type to witness, enter:
esxcli vsan network ip add -i vmk<#> -T=witness
To verify the traffic type, enter:
esxcli vsan network list
f. Repeat these steps for each node in the cluster.
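For example, if vmk5 is the witness traffic VMkernel interface on the node (vmk5 matches the interface used in the static-route examples later in this procedure):
esxcli vsan network ip add -i vmk5 -T=witness
esxcli vsan network list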
4. To remove the witness host disk group, perform the following:
a. Select the VxRail cluster and click Configure > vSAN > Disk Management.
b. In the Disk Group window, locate the witness host and select its disk group. Click Delete.
c. Click DELETE from the Remove Disk Group window.
5. To disable the stretched cluster, perform the following:
a. Log in to the VMware vSphere Web Client.
b. Select the VxRail cluster and click Configure > vSAN > Fault Domains.
c. In the Stretched Cluster window, click Disable.
d. From the Remove Witness Host window, click REMOVE.
6. When Layer 3 switching is used for the witnessPg port group, you must add static routes so that the witness and the ESXi hosts can communicate. When witness traffic is separated, you must reset the static routes on each node and on the witness host.
In the following examples, the existing subnet on both data sites is 172.18.23.0/24 and the existing subnet on the witness host is 172.18.25.0/24. Modify the subnet on Data Site-1 to 172.18.19.0/24 and on Data Site-2 to 172.18.20.0/24. The subnet on the Witness host, which is 172.18.25.0/24, does not change.
a. Enable SSH on the node.
b. To determine the existing static route on the node for the vSAN network (vmk3), enter:
esxcli network ip route ipv4 list
c. To remove the existing static route on the node, enter:
esxcli network ip route ipv4 remove -n <witness_vsan_subnet>/24 -g
<local_vsan_gateway>
d. To add a static route on the node for the witness traffic network (vmk5), depending on which site the node is associated
with, enter:
● For Site-1, enter:
esxcli network ip route ipv4 add -n <Witness VSAN subnet>/24 -g <Site-1 Witness
traffic subnet gateway>
● For Site-2, enter:
esxcli network ip route ipv4 add -n <Witness VSAN subnet>/24 -g <Site-2 Witness
traffic subnet gateway>
Repeat this task for each node in the cluster.
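For example, on a Site-1 node, using the subnets above and assuming the gateway on each subnet is its .1 address (hypothetical values):
# Remove the old route to the witness subnet through the old vSAN gateway
esxcli network ip route ipv4 remove -n 172.18.25.0/24 -g 172.18.23.1
# Add the route to the witness subnet through the Site-1 witness traffic gateway
esxcli network ip route ipv4 add -n 172.18.25.0/24 -g 172.18.19.1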
e. Enable SSH on the witness ESXi host.
f. To determine the existing static route on the witness ESXi host for the vSAN network (vmk1), enter:
esxcli network ip route ipv4 list
g. To remove the existing static route on the witness ESXi host for the vSAN network, enter:
esxcli network ip route ipv4 remove -n <data hosts VSAN subnet>/24 -g <local existing VSAN gateway>
h. To add a static route on the witness ESXi host for the witness traffic network for Site-1, enter:
esxcli network ip route ipv4 add -n <Site-1 Witness traffic subnet>/24 -g <Site-1 Witness traffic gateway>
i. To add a static route for Site-2, enter:
esxcli network ip route ipv4 add -n <Site-2 Witness traffic subnet>/24 -g <Site-2 Witness traffic gateway>
To validate the network setup, send a ping in both directions:
● From any data node on Site-1 and Site-2, enter: vmkping -I vmk5 <vSAN IP address of the Witness>
● From the witness host, to ping the witness traffic IP address of any data node on Site-1 and Site-2, enter: vmkping -I vmk1 <Site-1 or Site-2 witness traffic IP of the node>
7. To reconfigure the stretched cluster, perform the following:
a. Log in to the VMware vSphere Web Client.
b. Select the VxRail cluster and click Configure > vSAN > Fault Domains.
c. In the Stretched Cluster window, click CONFIGURE and follow the instructions in the wizard.
7
Collect log bundles
You can collect logs using the full log bundle method or the light log bundle method. The full log collection method is time consuming; the light log bundle method contains only the VxRail Manager logs. To accelerate the diagnostic process, component and node selection are part of the VxRail Manager log bundle collection.
About this task
This feature provides a new REST API to obtain the logs. The following apply for collecting the log bundle:
● Node selection is only supported with VMware ESXi, iDRAC, and PTAgent log collection.
● iDRAC log collection and PTAgent log collection are supported on Dell 14G and later platforms.
● Witness log collection does not support dynamic node clusters.
● Platform log collection is only supported on a Dell platform.
● VMware vCenter Server log bundle collection is not supported in T1 network configurations. You can collect the VMware vCenter Server log bundle directly from the VMware vCenter Server.
You can generate the VxRail Manager, VMware vCenter Server, VMware ESXi, iDRAC, and PTAgent log bundle with the node
specification.
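The REST API can also be called directly. The sketch below is illustrative only; the endpoint path and body fields (types, nodes) are assumptions to verify against the VxRail API reference for your release:
# Hypothetical call to the public log-collection API (verify endpoint and body for your version)
curl -k -X POST -H "Content-Type: application/json" \
  -u 'administrator@vsphere.local:<password>' \
  -d '{"types": ["vxm", "esxi"], "nodes": ["<node_sn>"]}' \
  'https://<vxm_ipaddr>/rest/vxm/v1/support/logs'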
This procedure applies to VxRail clusters running VxRail versions 4.5.3xx, 4.7.xx, 7.0.3xx or later, and 8.0.x and later, managed by either a VxRail-managed or a customer-managed VMware vCenter Server.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Collect the VxRail Manager log bundle
Collect the VxRail Manager log bundle.
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
You can SSH to VxRail Manager or log in to the VMware vSphere Web Client and launch the VxRail Manager VM on the web
console.
2. To switch to root, enter:
su root
3. To generate the VxRail Manager log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -v
/mystic/generateLogBundle.py --vxm
/mystic/generateLogBundle.py --types vxm
dellvxm:~ # /mystic/generateLogBundle.py -v
Start to collect log bundle.
types: vxm
The request id for collecting log bundle is
a2a3f85e-408a-4de4-96dc-1c8dfd3e17fa
Start looping a2a3f85e-408a-4de4-96dc-1c8dfd3e17fa until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip
4. To verify the log bundle file, enter:
ls -l <file_path>
For example:
ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip
-rw-rw-rw- 1 root root 64092471 August 5 21:41 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip
Collect log bundles from VxRail Manager
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the VMware vSAN cluster or the VMware vSphere cluster.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Under Log Collection, click CREATE and select the log types.
5. When finished, select the generated log bundle and click Download.
The <witness> log type is only for a 2-node ROBO environment and does not work in a normal cluster configuration.
Collect the VMware vCenter Server log bundle
Collect the VMware vCenter Server log bundle.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware vCenter Server log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -c
/mystic/generateLogBundle.py --vcenter
/mystic/generateLogBundle.py --types vcenter
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -c
Start to collect log bundle.
types: vcenter
The request id for collecting log bundle is
0e9d3cb3-89c3-49d7-921c-b35fed410fe1
Start looping 0e9d3cb3-89c3-49d7-921c-b35fed410fe1 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_18_14_29_49.zip
4. To verify the VMware vCenter Server log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_57_54.zip
-rw-rw-rw- 1 root root 246578225 August 5 21:58 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_57_54.zip
Collect the VMware ESXi log bundle
Collect the VMware ESXi log bundle.
Steps
1. Log in to the VxRail Manager CLI as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware ESXi log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -e
/mystic/generateLogBundle.py --esxi
/mystic/generateLogBundle.py --types esxi
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -e
Start to collect log bundle.
types: esxi
The request id for collecting log bundle is
2807b8ca-5d84-4578-9409-d6eb5389ff8b
Start looping 2807b8ca-5d84-4578-9409-d6eb5389ff8b until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
4. To verify the VMware ESXi log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
-rw-rw-rw- 1 root root 3019014 August 5 22:27 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.
vxm:~ # /mystic/generateLogBundle.py -e
Start to collect log bundle.
types: esxi
The request id for collecting log bundle is
7c260275-1921-4e2f-8408-95d6cef88a35
Start looping 7c260275-1921-4e2f-8408-95d6cef88a35 until request finished.
Failed to generate esxi log bundle on host esx-c.122-powerx.dell.com due to internal
error. See KB000200163.
Failed to generate esxi log bundle on host esx-a.122-powerx.dell.com due to internal
error. See KB000200163.
Failed to generate esxi log bundle on host esx-b.122-powerx.dell.com due to internal
error. See KB000200163.
Collect the iDRAC log bundle
Collect the iDRAC log bundle.
Steps
1. Log in to the VxRail Manager as mystic and switch to root by entering su root.
2. To generate the iDRAC log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -i
/mystic/generateLogBundle.py --idrac
/mystic/generateLogBundle.py --types idrac
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -i
Start to collect log bundle.
types: idrac
The request id for collecting log bundle is
6baa1017-b44f-4c2b-9310-fa1605cc976a
Start looping 6baa1017-b44f-4c2b-9310-fa1605cc976a until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_27_49.zip
3. To verify the iDRAC log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_27_49.zip
-rw-rw-rw- 1 root root 3019014 August 5 22:27 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_27_49.zip
Collect the platform log bundle
Collect the platform log bundle.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the platform log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -p
/mystic/generateLogBundle.py --platform
/mystic/generateLogBundle.py --types ptagent
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -p
Start to collect log bundle.
types: platform
The request id for collecting log bundle is
50661fb1-d552-47ef-be8f-e42ffc08d07f
Start looping 50661fb1-d552-47ef-be8f-e42ffc08d07f until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_5291caa7-8938fa82-169c-8b010f5d1658_2022-10-08_12_53_48.zip
4. To verify the PTAgent log bundle file, enter:
ls -l <file_path>
Collect the log bundle with node selection
Collect the log bundle with node selection for VMware ESXi, iDRAC, and platforms.
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware ESXi log bundle with node selection, enter any of the following commands:
/mystic/generateLogBundle.py -e 2C49DN2, 3F89DN2
/mystic/generateLogBundle.py --esxi --nodes 2C49DN2, 3F89DN2
/mystic/generateLogBundle.py --types esxi --nodes 2C49DN2, 3F89DN2
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -e --nodes 2C49DN2, 3F89DN2
Start to collect log bundle.
types: esxi
nodes: 2C49DN2, 3F89DN2
The request id for collecting log bundle is
e8778824-2cc8-407d-9912-be8d73261d85
Start looping e8778824-2cc8-407d-9912-be8d73261d85 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip
4. To verify the log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip
-rw-rw-rw- 1 root root 485734016 August 7 09:34 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip
If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.
vxm:~ # /mystic/generateLogBundle.py -e --nodes 4100003
Start to collect log bundle.
types: esxi
nodes: 4100003
The request id for collecting log bundle is
55d2c7db-0247-4842-bd06-0deea9b8bc35
Start looping 55d2c7db-0247-4842-bd06-0deea9b8bc35 until request finished.
Failed to generate esxi log bundle on host esx01.poda.powerx.dell.com due to internal
error. See KB000200163.
Collect the log bundle with component selection
Collect the log bundle with component selection for VxRail Manager, VMware vCenter Server, VMware ESXi, iDRAC, platform,
and witness.
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the log bundle with VxRail Manager and VMware vCenter Server types selected, enter any of the following
commands:
/mystic/generateLogBundle.py -vc
/mystic/generateLogBundle.py -v -c
/mystic/generateLogBundle.py --vxm --vcenter
/mystic/generateLogBundle.py --types vxm,vcenter
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -vc
Start to collect log bundle.
types: vxm,vcenter
The request id for collecting log bundle is
15e2d374-a38f-4296-a1e0-1bc42f3398a4
Start looping 15e2d374-a38f-4296-a1e0-1bc42f3398a4 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_521ffa8e-70f7-793eea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip
4. To verify the log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/VxRail_Support_Bundle_521ffa8e-70f7-793eea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip
-rw-rw-rw- 1 root root 691083648 August 20 09:34 /tmp/mystic/dc/
VxRail_Support_Bundle_521ffa8e-70f7-793e-ea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip
Collect the full log bundle
Collect the full log bundle.
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the full log bundle, enter any of the following commands:
/mystic/generateLogBundle.py
/mystic/generateLogBundle
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle
Start to collect log bundle.
types: vxm,vcenter,esxi,idrac,platform
The request id for collecting log bundle is
99419c45-3a75-4956-9470-255e94239175
Start looping 99419c45-3a75-4956-9470-255e94239175 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
4. To verify the full log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
-rw-rw-rw- 1 root root 991840517 August 7 14:17 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.
vxm:~ # /mystic/generateLogBundle.py
Start to collect log bundle.
types: idrac,vcenter,platform,vxm,esxi
The request id for collecting log bundle is
8335787c-2641-48d3-9869-675f20489c38
Start looping 8335787c-2641-48d3-9869-675f20489c38 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_52e92049-182f-40abf117-4103dab9dc16_2023-04-06_22_08_37.zip
Warning
Failed to generate esxi log bundle on host esx01.poda.powerx.dell.com due to internal error. See KB000200163.
Failed to generate esxi log bundle on host esx02.poda.powerx.dell.com due to internal error. See KB000200163.
Failed to generate esxi log bundle on host esx03.poda.powerx.dell.com due to internal error. See KB000200163.
Collect the witness log bundle
Collect the witness log bundle.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the witness log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -w
/mystic/generateLogBundle.py --witness
/mystic/generateLogBundle.py --types witness
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -w
Start to collect log bundle.
types: witness
The request id for collecting log bundle is
5e4517fc-76f7-400a-85d1-64856a2aa46a
Start looping 5e4517fc-76f7-400a-85d1-64856a2aa46a until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1ebe7df507143_2022_09_21_05_20_34.zip
4. To verify the witness log bundle file, enter:
ls -l <file_path>
dellvxm:~ # ls -l /tmp/mystic/dc/VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1ebe7df507143_2022_09_21_05_20_34.zip
-rw-rw-rw- 1 root root 236810339 September 21 05:23 /tmp/mystic/dc/
VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1-ebe7df507143_2022_09_21_05_20_34.zip
The <witness> log type is only for a 2-node ROBO environment and does not work in a normal cluster configuration.
The witness log bundle is not included in the full log bundle collection; the witness log bundle collection must be performed separately.
Delete log bundles from VxRail Manager
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the VMware vSAN cluster or the VMware vSphere cluster.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Select the generated log bundle and click Delete.
Collect the satellite node log bundles from VxRail
Manager
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the cluster that contains the satellite node.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Click Create and select the log type and the node to start the log collection.
5. Upon successful completion of the log collection, select the generated log bundle and click Download.
Delete the satellite node bundles from VxRail Manager
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the cluster that contains the satellite node.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Select the generated log bundle and click Delete.
8
Manage certificates
To replace the VxRail Manager SSL certificate, use the VMware vCenter Server.
Import VMware vSphere SSL certificates to VxRail
Manager
When you replace the SSL certificates in the VMware vSphere environment of the VxRail cluster, update the VxRail Manager
with the new certificate authorities (CAs). VxRail Manager cannot recognize the VMware vSphere components until the new
CAs are installed. You can also replace multiple certificates and access the certificates in the VxRail Manager trust store through the REST API.
Prerequisites
Verify that the VMware ESXi host network is available during the replacement of the VMware ESXi host certificates into VxRail
Manager.
About this task
CAUTION: Do not perform this task in a VMware VCF environment.
The VxRail-managed VMware vCenter Server manages the VxRail version 8.0.x and later. See the VxRail 8.0.x Support Matrix
for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. To retrieve the new CA certificates from the VMware vCenter Server, perform the following:
a. Log in to the VMware vCenter Server.
b. Click Download trusted root CA certificate on the bottom-right corner or right-click the link and save as a ZIP file.
c. A download.zip file that contains the CA certificates (.<digit> files) and the revocation lists (.r<digit> files) is downloaded to your local machine.
NOTE: The revocation files are not used in this task.
2. Use FTP or SCP to transfer the download.zip to the VxRail Manager and select the target directory such as /tmp.
3. Log in to the VxRail Manager as mystic.
4. To switch to root, enter:
su root
5. To extract the download.zip file, enter:
cd /tmp
unzip download.zip
cd certs
ls *
NOTE: For VxRail, use the files under the lin subfolder.
6. Using the list of CA certificate files from the previous step, for each distinct file name (ignoring the digit extension), convert
the file to a new distinct CA file. The input file is the distinct file name with the largest number as the digit extension. For
example, if the list of certificate filenames is:
1285cf8e.0 1285cf8e.r0
The following file must be converted:
1285cf8e.0
a. To convert the file to DER format and output to a new file, enter:
openssl x509 -outform der -in /tmp/certs/lin/<file>.<highestdigit> -out /tmp/certs/lin/newcertfile<x>
b. Repeat these steps for each distinct CA certificate file.
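For example, using the sample certificate file above (the output filename newcertfile1 is illustrative):
openssl x509 -outform der -in /tmp/certs/lin/1285cf8e.0 -out /tmp/certs/lin/newcertfile1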
7. Using the list of revocation list files from the previous step, for each distinct file name (ignoring the digit extension), rename
the revocation list (r<digit> files) so that the file extension starts from ".r0", while the filename remains the same as before.
For example, if the list of certificate files is:
e2cd3e88.0 e2cd3e88.r1 e2cd3e88.r2
The following files must be renamed:
e2cd3e88.r1 e2cd3e88.r2
a. To rename the files, enter:
mv /tmp/certs/lin/e2cd3e88.r1 /tmp/certs/lin/e2cd3e88.r0
mv /tmp/certs/lin/e2cd3e88.r2 /tmp/certs/lin/e2cd3e88.r1
b. Repeat these steps for each distinct revocation list.
8. To copy the new certificate files to /var/lib/vmware-marvin/trust/lin, enter:
cp -f /tmp/certs/lin/* /var/lib/vmware-marvin/trust/lin
9. To change the permission and ownership of the new certificate files, enter:
chmod 777 /var/lib/vmware-marvin/trust/lin/*
chown tcserver:pivotal /var/lib/vmware-marvin/trust/lin/*
10. Select VxRail-Virtual-SAN-Cluster > Configure > VxRail > Health Monitoring and enable the health monitoring status.
11. To restart the marvin and runjars services, enter:
service vmware-marvin restart
systemctl status vmware-marvin
service runjars restart
systemctl status runjars
12. To change the permission on the new certificate files to -rw-r-r--, enter:
chmod 644 /var/lib/vmware-marvin/trust/lin/*
13. To restart the ms-day2 service, obtain the root credentials and switch to root by entering:
su root
a. To start a new instance of the ms-day2 service, enter:
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/ms-day2 --replicas=0
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/ms-day2 --replicas=1
b. To check that the previous ms-day2 service is terminated, enter:
kubectl get pods | grep day2
14. To update the VMware ESXi host certificates in VxRail Manager, enter:
cd /mystic/ssl/
python certificate_replacement.py
● If the default VMware vCenter Server management account does not have sufficient permissions to get the ESXi host certificates, use python certificate_replacement.py -u to provide another VMware vCenter Server account.
● Keep the ESXi host network available during the replacement of the ESXi host certificates into VxRail Manager. The updated certificates are stored under the VxRail Manager directory /var/lib/vmware-marvin/trust/host. If any host fails, check the failed host network using the failed host's serial number.
Next steps
For more information about replacing certificates, see KB 77894.
See Managing Certificates Using the vSphere Certificate Manager Utility.
Import the VMware vCenter Server certificates into
the VxRail Manager trust store
When you replace an SSL certificate after deployment, update the VxRail Manager with the new certificate authorities (CAs)
before the VxRail Manager recognizes the VMware vSphere components. This procedure enables you to access the certificates
in the VxRail Manager trust store using REST API.
About this task
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
CAUTION: Do not perform these steps in a VMware VCF environment.
Steps
1. To get the fingerprint list from the VxRail Manager trust store, perform the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. From the VxRail REST API left-menu, select Certificates > Get a list of fingerprints retrieved from....
c. Enter the username and password, and then click Send Request.
d. Copy the fingerprint value from the response window.
2. To get the certificate content from a specific fingerprint from the VxRail Manager trust store, perform the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. Go to Certificates and click Search the VxRail Manager trust store.
c. Enter the username, password, fingerprint, and click Send Request.
3. To import the VMware vCenter Server certificates into the VxRail Manager trust store, perform the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. Go to Certificates and click Import certificates into the VxRail.
c. Enter the username and password.
d. Place the certificate and the certificate revocation list (CRL) content in the body.
For example:
"-----BEGIN CERTIFICATE-----\nMIIEHzCCAwegAwIBAgIJANx5901VXVVVMA0GCSqGSIb3
DQEBCwUAMIGaMQswCQYD\nVQQDDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAEZ\
nFgVsb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNV\nBAoME2M0LXZjLnJhY2t
M
MDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVl\ncmluZzAeFw0yMjAzMjcwNjA3NTVaFw0zMjAzMjQw
N
jA3NTVaMIGaMQswCQYDVQQD\nDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAEZFg
Vs\nb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNVBAoM\nE2M0LXZjLnJhY2tM
M
DMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVlcmlu\nZzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCA
Q
oCggEBALSoNvUmgFYouBS6qjgp\nwb8NZdeT1Gv4r2/wbWNr332iP1A/ffv5Kq66AbaaNDu+0G6NSsdh/
IPDI31
YtaAP\n0VN7xvwuUJsYeCCwzldQE3tm/M4Xe0h/Tw//GodYRIkC/
5uYxKxm4hRCPu7Qvs8/\n2q1ypGclpzj5U5
lXOoxHy4JsmX9Argqee3F0mT9l0bHqGBlNu+cWtK0Hwh7eTaUj\nyhJ+pHVf8SHvQQnxIYSlo1e0o3lQnGv+TX
c
LctbKzmsHMPVjYOletqs/
9aCSsEgO\ncxhjSIxGwwgRI5BLGhakoLXHznyWsJ81vc0TBvMock2hPOV7VOhGpNib
BMB6Fz+j\nC3cCAwEAAaNmMGQwHQYDVR0OBBYEFCaeddsZQeRukQL/
pfUX2MbCFk30MB8GA1Ud\nEQQYMBaBDmV
tYWlsQGFjbWUuY29thwR/AAABMA4GA1UdDwEB/wQEAwIBBjASBgNV\nHRMBAf8ECDAGAQH/
AgEAMA0GCSqGSIb3
DQEBCwUAA4IBAQBbbnY6I8d/qVCExT89\nthbae5n81hzFFtL0t36HzmSkcCLZnU/
w8cWuILabJCSYbJYREGcGr
vKkplF9Bfsp\nw/
u4Y1nwHrLWmfX1spNWgEWFGbSzE2qxFLIozNBKcMS1+CvZP6fIc1CfqjvMTEt2\nyNGbR+gt
BG5Are3K6VMZPihSCcWqu7XMsX9yCVdpOFCbV5m27JxYMwleOA220io6\nI3PJVAvCsRNoaBu7UiWEmjAsqj0m
1
v4+c3XG+2QquJ6CGHrfgoxGQDormUXGbxvp\neUq86TgxcbH76LzmLTywJzQ/
DFYm3bBHOgzCH2F0Ra7jz46gnu
uOPqWtJ4pU1Ghj\nm2rf\n-----END CERTIFICATE-----\n-----BEGIN X509 CRL-----\nMIICFTCB/
gIB
ATANBgkqhkiG9w0BAQsFADCBmjELMAkGA1UEAwwCQ0ExFzAVBgoJ\nkiaJk/
IsZAEZFgd2c3BoZXJlMRUwEwYKC
ZImiZPyLGQBGRYFbG9jYWwxCzAJBgNV\nBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRwwGgYDVQQKDBNjN
C
12Yy5yYWNr\nTDAzLmxvY2FsMRswGQYDVQQLDBJWTXdhcmUgRW5naW5lZXJpbmcXDTIyMDMzMTAx\nNTc1NVoX
D
TIyMDQzMDAxNTc1NVqgLzAtMAoGA1UdFAQDAgEFMB8GA1UdIwQYMBaA\nFCaeddsZQeRukQL/
pfUX2MbCFk30MA
0GCSqGSIb3DQEBCwUAA4IBAQBJ4QhmJQb/\nl/lU9FhYGcQEgFyBFEH9d6G2y66yPrJ/
40sCpUb7JMkdr7l2bYN
n1eRHljYBEkrx\n9KMX/
l5RkG+JTeZdHWkGQNB3U+qFvNANUYuOXYPwRoCVgiAoKs98YMzx8TKcluOE\nsHa8Ur
Cx5fy1gvPsreK9ODxdU9CpNjavfcV2sFkw07mmCDGGvX9GUc7y5JtFH50y\nAcVKVisZ5sT1yHRlJ0MOg1NGM0
8
VV2DpHUaZmNh7MgEx8/hNJlz2skQ0Zc8EVEzR\n3ULUC3/
djyXZP3QQ3PlKRgwaziPq8kRk+8jQby8ZipMtW4IH
S2WvvFvPDXWzgH/J\nE6TJVaqfezuc\n-----END X509 CRL-----"
You can also place only the certificate content in the body.
For example:
"-----BEGIN CERTIFICATE-----\nMIIEHzCCAwegAwIBAgIJANx5901VXVVVMA0GCSqGS
Ib3DQEBCwUAMIGaMQswCQYD\nVQQDDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZA
EZ\nFgVsb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNV\nBAoME2M0LXZjLnJh
Y
2tMMDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVl\ncmluZzAeFw0yMjAzMjcwNjA3NTVaFw0zMjAzM
j
QwNjA3NTVaMIGaMQswCQYDVQQD\nDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAE
ZFgVs\nb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNVBAoM\nE2M0LXZjLnJhY
2
tMMDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVlcmlu\nZzCCASIwDQYJKoZIhvcNAQEBBQADggEPAD
C
CAQoCggEBALSoNvUmgFYouBS6qjgp\nwb8NZdeT1Gv4r2/wbWNr332iP1A/ffv5Kq66AbaaNDu+0G6NSsdh/
IPD
I31YtaAP\n0VN7xvwuUJsYeCCwzldQE3tm/M4Xe0h/Tw//GodYRIkC/
5uYxKxm4hRCPu7Qvs8/\n2q1ypGclpzj
5U5lXOoxHy4JsmX9Argqee3F0mT9l0bHqGBlNu+cWtK0Hwh7eTaUj\nyhJ+pHVf8SHvQQnxIYSlo1e0o3lQnGv
+
TXcLctbKzmsHMPVjYOletqs/
9aCSsEgO\ncxhjSIxGwwgRI5BLGhakoLXHznyWsJ81vc0TBvMock2hPOV7VOhGp
NibBMB6Fz+j\nC3cCAwEAAaNmMGQwHQYDVR0OBBYEFCaeddsZQeRukQL/
pfUX2MbCFk30MB8GA1Ud\nEQQYMBaB
DmVtYWlsQGFjbWUuY29thwR/AAABMA4GA1UdDwEB/wQEAwIBBjASBgNV\nHRMBAf8ECDAGAQH/
AgEAMA0GCSqGS
Ib3DQEBCwUAA4IBAQBbbnY6I8d/qVCExT89\nthbae5n81hzFFtL0t36HzmSkcCLZnU/
w8cWuILabJCSYbJYREG
cGrvKkplF9Bfsp\nw/
u4Y1nwHrLWmfX1spNWgEWFGbSzE2qxFLIozNBKcMS1+CvZP6fIc1CfqjvMTEt2\nyNGbR
+gtBG5Are3K6VMZPihSCcWqu7XMsX9yCVdpOFCbV5m27JxYMwleOA220io6\nI3PJVAvCsRNoaBu7UiWEmjAsq
j
0m1v4+c3XG+2QquJ6CGHrfgoxGQDormUXGbxvp\neUq86TgxcbH76LzmLTywJzQ/
DFYm3bBHOgzCH2F0Ra7jz46
gnuuOPqWtJ4pU1Ghj\nm2rf\n-----END CERTIFICATE-----"
e. Click Send Request.
4. To delete the VMware vCenter Server certificate and the CRL files by a specific fingerprint from the trust store, perform the
following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. Go to Certificates and click Delete the certificate file from....
c. Enter the username, password, fingerprint, and click Send Request.
Import the VMware ESXi host certificates to VxRail
Manager
Import the VMware ESXi host certificates to VxRail Manager.
Prerequisites
● Verify that the VMware ESXi host network is available when you replace the VMware ESXi host certificates into VxRail
Manager.
● Obtain the root password.
About this task
After the VxRail initial deployment, if you replace VMware ESXi certificates in the VxRail cluster's VMware vSphere environment, you must also import them into VxRail Manager. You can import multiple certificates simultaneously.
See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To replace certificates on a node, enter:
cd /mystic/ssl/
python certificate_replacement.py -sn <node_sn1> <node_sn2>
The updated certificates are stored under the /var/lib/vmware-marvin/trust/host directory in VxRail Manager. If a host fails, check the failed host network using the failed host's serial number.
When the update is complete, the following table shows the results:

Result     Description
new        The first time that you download the VMware ESXi host certificate.
update     The downloaded VMware ESXi host certificate is different from the original one. The VMware ESXi host certificate is updated successfully.
identical  The downloaded VMware ESXi host certificate is identical to the original one. No action is required.
4. To manually import the VMware vCenter Server SSL certificate on the VxRail Manager, see KB 000077894.
9
Rename VxRail components
You can use the VMware vCenter Server to rename many components. Links to additional procedures are provided.
Use the VMware vCenter Server to rename the following components:
● VxRail data center
● VM folder
● VxRail cluster
● Witness port group - 2-node cluster
● VMware ESXi hostname and IP address
Use the following links to rename other VxRail components:
● To rename the VMware VDS or dvPortGroup, see Renaming a VMware VDS/dvPortGroup while virtual machines are
connected.
● To rename the vSAN datastore, see Rename the VxRail vSAN Datastore.
● To rename the VxRail VM, VxRail-managed VMware vCenter Server Appliance, and customer-managed VMware vCenter Server Appliance, see General Virtual Machine Options.
Change the FQDN of the VMware vCenter Server
Appliance
Change the VMware vCenter Server Appliance FQDN and complete the VxRail management configuration.
Prerequisites
Before you change the FQDN of VMware vCenter Servers, perform the following:
● Back up the VMware vCenter Server in the SSO domain.
● Before the FQDN changes, unregister the products that are registered from the VMware vCenter Server. Once the FQDN
change is complete, they can be re-registered.
● Delete the VMware vCenter High Availability (vCHA) configuration and reconfigure after the changes.
● If you rename the VMware vCenter Server, rejoin it to the Microsoft AD.
● Verify that the FQDN or hostname resolves to the provided IP address (DNS A records).
● Do not unregister the VxRail VMware vCenter Server plug-in.
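For example, to confirm the DNS A record before you begin (the hostname and address are the sample values used later in this procedure):
dig +short vcnew.testfqdn.local
172.16.10.211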
About this task
This procedure applies to VxRail 8.0.x and later clusters managed by the VMware vCenter Server. See the VxRail 8.0.x Support Matrix for a list of supported versions.
The following table shows the supported and unsupported features:

Supported features                  Not supported features
Enhanced Linked Mode (ELM)          Pure IP customer-managed VMware vCenter Server without FQDN
Change FQDN to a different domain   Change the VMware vCenter Server Appliance FQDN on VMware VCF
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. For internal DNS, configure the VxRail Manager to add the VMware vCenter Server Appliance FQDN DNS record and
perform the following:
a. Using SSH, log in to the VxRail Manager as mystic and su to root.
b. Update /etc/hosts and add an entry for the new VMware vCenter Server FQDN. In the following example, the added entry is 172.16.10.211 vcnew.testfqdn.local vcnew.
127.0.0.1 localhost localhost.localdom
172.16.10.211 vc.testfqdn.local vc
172.16.10.211 vcnew.testfqdn.local vcnew
172.16.10.150 vxm.testfqdn.local vxm
172.16.10.111 vcluster101-esx01.testfqdn.local vcluster101-esx01
172.16.10.112 vcluster101-esx02.testfqdn.local vcluster101-esx02
172.16.10.113 vcluster101-esx03.testfqdn.local vcluster101-esx03
c. To restart the dnsmasq service, enter:
systemctl restart dnsmasq
2. To update the FQDN in the VMware vCenter Server Appliance Management Interface (VAMI), perform the following:
a. Log in to the VMware vCenter Server as root on port 5480.
b. Click Networking.
c. Under Network Settings, click Edit.
d. In the wizard, select NIC 0 (management network) and click NEXT.
e. Change the VMware vCenter Server hostname or FQDN to its new name and click NEXT.
f. Enter the SSO administrator (nonroot user) credentials for the VMware vCenter Server.
NOTE: Do not use the root credentials to log in to the VMware vCenter Server.
g. Review the changes that are made to the VMware vCenter Server's FQDN and IP address settings.
h. Acknowledge that the VMware vCenter Server backup is performed.
Perform these additional steps after the FQDN of the VMware vCenter Server is changed. Do not unregister the VxRail
VMware vCenter Server plug-in.
3. Wait for the FQDN change procedure to complete.
After the changes are complete, an alert displays and the browser automatically redirects back to the VAMI on port 5480 within 10 seconds. Click Redirect Now to redirect immediately.
4. Log in to the VMware vCenter Server as root on port 5480 and confirm that the configuration is complete.
5. To renew the node certificates in the VMware vSphere Web Client, perform the following:
a. From the Inventory icon, select a VxRail cluster.
b. Click the Configure tab and select VxRail > Certificate.
c. Under Certificate Management, click Update.
6. To restart the VxRail-platform-service for each node, perform the following:
a. From the Inventory icon, select a VxRail cluster.
b. Select a host in the VxRail cluster.
c. Click the Configure tab.
d. Select System > Services and select VxRail-platform-service to restart the service.
7. (OPTIONAL) To update the VxRail Manager database for the TLD change, perform the following:
a. Connect to the database and enter:
psql -U postgres vxrail
b. To confirm your existing TLD, enter:
select * from configuration.configuration where key='system_tld';
c. To update your new FQDN value, enter:
update configuration.configuration set value='new_FQDN' where key='system_tld';
d. To verify your new FQDN, enter:
select * from configuration.configuration where key='system_tld';
8. To update the VMware vCenter Server Appliance FQDN information in the VxRail Manager using the root credentials,
perform the following:
a. To obtain the existing VMware vCenter Server host value, enter:
curl --location --request GET 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vcenter_host' --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock
b. To update the VMware vCenter Server host value with the new FQDN, enter:
curl --location --request PUT 'http://127.0.0.1/rest/vxm/internal/configservice/v1/
configuration/keys/vcenter_host' --header 'Content-Type: application/json'
--unix-socket /var/lib/vxrail/nginx/socket/nginx.sock --data-raw '{"value":
"<New_vcenter_FQDN>"}'
9. To download and update the certificates using the API, enter:
curl -k -X POST -H "Content-Type: application/json" --unix-socket /var/lib/vxrail/
nginx/socket/nginx.sock -d @- << EOF http://localhost/rest/vxm/internal/operation/v1/vxm/
download-vc-certs/execute
{
"vc_info": {
"host": "<New_VC_FQDN>",
"username": "administrator@vsphere.local",
"password": "<password>",
"port": 443
},
"auto_accept_vc_cert": true
}
EOF
{"result": {"vc_certificate_management_mode": "vmca", "vc_certificate": {"type":
"VC", "valid": true, "thumbprint":
"B7:43:CE:13:84:92:FA:0D:FF:03:ED:E7:B7:BB:48:09:D4:24:FF:5C", "data": {"validity":
{"from": 1667109980, "to": 1982469980}, "public_key": {"algorithm": "rsaEncryption",
"modulus":
"00:9b:ae:38:58:c4:8f:97:59:e8:c8:d5:28:ca:aa:1a:7e:d3:46:5d:c9:ad:e2:22:22:3f:48:32:8
8:17:3c:3f:2c:85:52:b8:a7:c7:69:6e:9d:61:b7:eb:24:c9:80:91:07:9c:43:9e:1f:01:46:09:b6:
44:2d:34:77:ff:6f:ed:d7:fd:5b:65:c1:e8:85:c8:51:86:4b:ae:b5:96:fd:c6:5e:03:81:1d:da:3a
:b8:8c:86:2a:9e:99:19:48:1f:16:37:41:bb:27:f2:ec:c8:e0:f5:1d:49:8e:80:df:49:c4:0b:de:1
a:61:5a:0a:9b:f6:9c:9c:5e:3c:24:84:e2:da:58:fe:c8:90:02:70:12:78:e8:21:47:4e:19:79:49:
0a:3b:3a:12:87:9b:ed:9e:45:01:b2:93:c6:ec:b5:4e:6e:a4:c8:37:25:69:df:21:e7:e7:34:d4:6e
:0a:fe:f1:83:b6:ce:31:5d:8c:37:61:8a:98:fb:e6:51:0b:98:48:9c:4c:ad:41:65:f7:47:d6:2b:1
7:72:be:80:ee:97:47:b6:3b:98:0f:b5:9e:d3:fa:8d:c3:b3:e3:70:d6:15:dd:8d:32:2a:b9:83:3d:
3b:85:3f:5d:cc:2d:44:db:f7:e0:40:83:a9:f0:be:97:6d:43:19:9d:e4:a3:12:af:1c:c4:17:cc:15
:28:8b:81:a0:8e:ba:1e:dd:e9:68:83:51:c4:69:5c:39:b2:c6:74:d2:b6:c3:dc:9b:27:65:53:6d:6
7:a5:ae:25:07:ab:8f:de:ed:f7:6f:b0:f7:71:7f:8d:ee:30:20:3c:a5:c4:2c:9a:93:dd:71:72:ba:
0c:08:70:8a:16:a0:2e:66:cf:34:ad:b7:b0:85:e7:7d:90:83:b0:b3:24:cb:8d:6b:16:6c:65:5c:72
:f2:45:95:dc:6c:37:01:06:c9:ad:4c:12:a1:4d:74:c4:97:eb:17:5b:50:d0:00:66:3e:fc:c8:d8:f
c:27:d9:e1:3a:16:b2:21:ef:a6:5b:c1:c9", "length": "(3072 bit)"}, "extensions":
{"key_usage": "Certificate Sign, CRL Sign", "subject_alternative_name":
"email:email@acme.com, IP Address:127.0.0.1", "subject_key_identifier":
"DF:DD:0C:91:75:92:26:B6:A8:4E:74:2B:A3:D9:27:4E:40:DD:DD:68"}, "version": "3 (0x2)",
"serial_number": "e9:68:06:7b:75:59:ce:bd", "signature_algorithm":
"sha256WithRSAEncryption", "issuer": "CN = CA, DC = vsphere, DC = local, C = US, ST =
California, O = c3-vc.rackH04.local, OU = VMware Engineering", "subject": "CN = CA,
DC = vsphere, DC = local, C = US, ST = California, O = c3-vc.rackH04.local, OU =
VMware Engineering"}}}}
10. To restart the runjars service (which restarts the vmware-marvin components), enter:
systemctl restart runjars
11. Clear the cache to ensure that the VxRail Manager information is updated correctly.
12. To generate a base64 string for the username:password, enter:
echo -n "administrator@vsphere.local:password" | base64
YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsOnBhc3N3b3Jk
13. To create a POST request in the VxRail Manager, enter:
curl --location --request POST 'http://127.0.0.1/rest/vxm/private/pv/cache/'
--header 'Content-Type: application/json' --header 'Authorization: Basic
YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsOnBhc3N3b3Jk' -k
Clear the cache in the VxRail Manager with the authorization basic string generated in Step 12.
14. (OPTIONAL) For internal DNS only, to clean up the old VMware vCenter Server FQDN records, perform the following:
a. Using SSH, log in to the VxRail Manager as root.
b. Update the /etc/hosts file and remove the now-unused entry for the old VMware vCenter Server FQDN. In the following example, the old FQDN entry is 172.16.10.211 vc.testfqdn.local vc.
127.0.0.1 localhost localhost.localdom
172.16.10.211 vc.testfqdn.local vc          <-- Delete the unused entry
172.16.10.211 vcnew.testfqdn.local vcnew
172.16.10.150 vxm.testfqdn.local vxm
172.16.10.111 vcluster101-esx01.testfqdn.local vcluster101-esx01
172.16.10.112 vcluster101-esx02.testfqdn.local vcluster101-esx02
172.16.10.113 vcluster101-esx03.testfqdn.local vcluster101-esx03
c. To restart the dnsmasq service, enter:
systemctl restart dnsmasq
Next steps
For more information, see:
● KB 000077894 to manually import the VMware vCenter Server SSL certificate on the VxRail Manager.
● Managing Certificates Using the vSphere Certificate Manager Utility
● Changing your vCenter Server's FQDN
10
Remove VxRail nodes
Remove nodes to decommission the older generation of VxRail nodes and migrate them to the new generation VxRail.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
NOTE: VxRail version 8.0.010 does not support VMware vSAN ESA or satellite nodes.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
CAUTION: You cannot use this task to replace a node. Node removal does not destroy the VxRail cluster.
Prerequisites
● Disable the remote support connectivity, if enabled.
● Verify that the VxRail cluster is in a healthy state.
● Add new nodes into the cluster before running the node removal procedure to avoid any capacity or node limitations.
● Verify that the VxRail cluster has enough nodes remaining after the node removal to support the current Failures to Tolerate (FTT) setting.
● The following table lists the minimum number of VMware ESXi nodes in the VxRail cluster before node removal:

VMware vSAN RAID and FTT                    Minimum nodes
RAID 1, FTT = 1                             4
RAID 1, FTT = 2                             6
RAID 5, FTT = 1 (All-flash VxRail only)     5
RAID 6, FTT = 2 (All-flash VxRail only)     7
Verify the VxRail cluster health
Verify the VxRail cluster health status.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select a cluster and click the Monitor tab.
3. Select vSAN > Skyline Health.
4. If alarms display, acknowledge them and select Reset to Green at the node and cluster levels before you remove the node.
Verify the capacity, CPU, and memory requirements
Before removing the node, verify that the capacity, CPU, and memory are sufficient to allow the VxRail cluster to continue
running without any issue.
About this task
If the VMware vSAN used capacity percentage is over 80 percent, do not remove the node, as doing so may lead to VMware vSAN performance issues.
Use the following formula to determine whether cluster requirements can be met after the node removal:
vSAN used capacity % = used total / (current capacity - capacity to be removed)
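For example, with hypothetical numbers: if the cluster reports 40 TB used out of 100 TB of vSAN capacity, and the node to be removed contributes 20 TB, then 40 / (100 - 20) = 50 percent. Because 50 percent is below the 80 percent threshold, the removal is acceptable from a capacity standpoint.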
Steps
1. To view capacity for the cluster, log in to the VMware vSphere Web Client as administrator, and perform the following:
a. Under the Inventory icon, select the VMware vSAN cluster and click the Monitor tab.
b. Select vSAN > Capacity.
2. To check the impact of data migration on a node, perform the following:
a. Select vSAN > Data Migration Pre-check.
b. From the SELECT OBJECT drop-down, select the host.
c. From the vSAN data migration drop-down, select Full data migration and click PRE-CHECK.
3. To view disk capacity, perform the following:
a. Select the VMware vSAN cluster and click the Configure tab.
b. Select vSAN > Disk Management to view capacity.
Use the following formulas to compute percentage used:
CPU_used_% = Consumed_Cluster_CPU / (CPU_capacity - Plan_to_Remove_CPU_sum)
Memory_used_% = Consumed_Cluster_Memory / (Memory_capacity - Plan_to_Remove_Memory_sum)
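The same projection applies to CPU and memory. A minimal Python sketch of these two formulas, with hypothetical sample numbers only:

def used_after_removal(consumed, capacity, to_remove):
    # Projected utilization fraction once the planned node resources are removed.
    return consumed / (capacity - to_remove)

cpu_used = used_after_removal(consumed=180, capacity=400, to_remove=100)  # GHz
mem_used = used_after_removal(consumed=1.5, capacity=4.0, to_remove=1.0)  # TB
print(f"CPU {cpu_used:.0%}, memory {mem_used:.0%}")  # CPU 60%, memory 50%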
4. To view the CPU and memory overview, perform the following:
a. Select the VMware vSAN cluster and click the Monitor tab.
b. Select Resource Allocation > Utilization.
5. To check the CPU and memory resources on a node, perform the following:
a. Select the node and click the Summary tab.
b. View the Hardware window for CPU, memory, Virtual Flash Resource, Networking, and Storage.
Remove the node
Place the node into maintenance mode before you remove the node.
Prerequisites
Before you remove the node, perform the following steps to place the node into maintenance mode:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon, right-click the host that you want to remove and select Maintenance Mode > Enter Maintenance
Mode.
3. In the Enter Maintenance Mode dialog, check Move powered-off and suspended virtual machines to other hosts in
the cluster.
4. Next to vSAN data migration, from the drop-down menu, select Full data migration and click GO-TO PRECHECK.
5. Verify that the test was successful, click ENTER MAINTENANCE MODE, and then click OK.
6. To monitor the VMware vSAN resyncing, click the cluster name and select Monitor > vSAN > Resyncing Objects.
Steps
1. To remove the host from the VxRail cluster, perform the following:
a. Select the cluster and click the Configure tab.
b. Select VxRail > Hosts.
c. Select the host and click REMOVE.
2. In the Remove Host from Cluster window, enter the VMware vCenter Server administrator and root account information.
3. After the account information is entered, click VERIFY CREDENTIALS.
4. When the validation is complete, click APPLY to create the Run Node Removal task.
5. After the precheck successfully completes, the host shuts down and is removed.
6. For an L3 deployment: If you have removed all the nodes of a segment, select the unused port group on the VMware VDS
and click Delete.
Next steps
To access SSH, perform the following:
● Log in to the VMware vCenter Server Management console as root.
● From the left menu, click Access.
● From the Access Settings page, click EDIT and enable SSH.
If a DNS resolution issue occurs after you removed the node, or if you added the removed node back into the cluster with a
new IP address, update dnsmasq on the VMware vCenter Server:

# service dnsmasq restart
Reboot VxRail nodes
Reboot the nodes from a cluster.
About this task
You can reboot hosts immediately or schedule a reboot.
Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select a VxRail host and click the Configure tab.
3. Select VxRail > Hosts.
4. From the Cluster Hosts window, check the hosts that you want to reboot and click REBOOT.
5. For Reboot Hosts, select Reboot Now and click NEXT.
6. On the Prechecks window, view the prechecks and click NEXT.
7. On the Summary window, click REBOOT NOW.
11
Restore the VMware vCenter Server from a file-based backup
Use a current file-based backup to restore the VMware vCenter Server in the original cluster.
Prerequisites
● Create a file-based backup.
● See System Requirements for the vCenter Server Appliance and Platform Services Controller Appliance to verify that your
system meets the minimum software and hardware requirements.
● Perform Download and Mount the vCenter Server Installer.
● To restore a VMware vCenter Server HA cluster, first power off the active, passive, and witness nodes.
● Verify that the target VMware ESXi host is in lockdown or maintenance mode and that it is not part of a fully automated
DRS cluster.
● Check if the DRS cluster of a VMware vCenter Server inventory has a VMware ESXi host that is not in lockdown or
maintenance mode.
● Configure the forward and reverse DNS records for the IP address before you assign a static IP address to the VMware
vCenter Server Appliance.
● Power off the backed-up VMware vCenter Server.
About this task
Deploy the OVA file from the VMware vCenter Server Appliance UI installer during the restoration process:
● Use the VMware vSphere Web Client or VMware Host Client to deploy the OVA file for the new VMware vCenter Server
Appliance or Platform Services Controller appliance as an alternative to using the UI installer for the first stage of the restore
process.
● Use the VMware vSphere Web Client to deploy the OVA file on a VMware ESXi host or VMware vCenter Server instance 5.5
or 6.0. Once the deployment is complete, log in to the appliance management interface of the newly deployed appliance to
proceed with the second stage of the restore process.
This procedure applies to the VxRail cluster running VxRail version 8.0.100 and later. See the VxRail 8.0 Support Matrix for a list
of the supported versions.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the Inventory icon.
3. Right-click the VxRail cluster and select Deploy OVF Template to launch the wizard.
4. From Select an OVF template, select Local file and then click UPLOAD FILES.
5. Select the VMware vCenter Server OVA file and click NEXT.
6. Enter a VM name and click NEXT.
7. Select the node where the VMware vCenter Server is installed and then click NEXT.
8. Verify that all details are correct. Ignore certificate warnings and click NEXT.
9. Accept all license agreements and click NEXT.
10. Select the appropriate configuration for the VMware vCenter Server environment and then click NEXT.
11. Select the VxRail vSAN Datastore storage and then click NEXT.
12. Select the VMware vCenter Server Network as the Destination Network.
13. In Customize template, enter the network configurations based on the network requirements of the end user.
14. Verify that the setup details are correct and then click FINISH.
15. Locate the host from the VMware vCenter Server Appliance window.
16. Log in to the VMware ESXi host that the initial VMware vCenter Server is running on and then click Shut down.
17. Access the new VMware vCenter Server VM on the VMware ESXi host and then click Power on.
18. Launch the VMware vCenter console and verify the network configurations.
NOTE: If the configuration information fails to deploy successfully, reconfigure it in the VMware vCenter Server
console.
Verify that the VMware vCenter IP Address, Subnet Mask, and Default Gateway are correct. If not, update them.
Verify that the DNS configuration is correct. If incorrect, update the DNS server and hostname.
Save the changes and exit from the VMware vCenter Server after you modify the IP address or DNS configurations.
19. Go to the newly deployed VMware vCenter Server at https://<FQDN>:5480 and click Restore.
20. Log in as root to the VMware vCenter Server Appliance.
21. Enter the backup file server Location, Username, and Password.
a. Enter the encryption password for the backup file, if the backup file is encrypted.
b. Enter backup server path/backup_vc_vxm_timestamp/vCenter/sn_hostname/
M_vCenter_version_backup_time as the VMware vCenter backup path.
22. Review the information, click FINISH, and then click OK in the warning message that displays.
23. To ensure a successful VMware vCenter Server restore, wait until the restore process is complete and click CLOSE.
24. To update VMware vCenter Server information in the VxRail database, perform the following:
a. Open a browser and log in to the VMware vCenter MOB.
b. Click content.
c. Click rootFolder and select the data center.
d. Click datacenter.
e. Find the hostFolder and click host folder.
f. Click childEntity and select the VxRail vSAN cluster.
g. Locate the hosts.
h. Locate the VMware vCenter Server in one host and click the VMware vCenter Server Appliance.
i. Click summary.
j. Click config.
k. Record the VM name and the UUID.
l. Use SSH to log in to the VxRail Manager, and then enter:

psql -U postgres vxrail -c "Update system.system_vm set uuid='[uuid]', moref_id='[vm]' where server_type='VCENTER';"

For example:

psql -U postgres vxrail -c "Update system.system_vm set uuid='564d8002-6cbb-3e6d-0f39-72d41a01d5a4', moref_id='vm-2022' where server_type='VCENTER';"
m. Log in to the VMware vCenter Server and verify that VxRail is connected.
12
VxRail Manager file-based backup
Use a backup script on the VxRail Manager VM to archive the VxRail Manager configuration files, database tables, and optionally
the logs. Run the script manually or set a schedule for automatic backups. Backups are stored in a folder on the VxRail primary
datastore. To restore the VxRail Manager, apply the backup to restore the configuration files and database tables onto a newly
deployed VxRail Manager VM.
There are two scripts: vxm_backup_restore.py and vxm_backup_restore_limited_bandwidth.py.
The two scripts are identical except that the latter is designed for limited Internet bandwidth. For instance, at a remote
office/branch office (ROBO) site with a two-node VxRail cluster on T1 lines, the backup process takes longer to complete
and the traffic to and from the cluster may be impacted. The vxm_backup_restore.py script uses the VMware vCenter
Server as a pass-through. The vxm_backup_restore_limited_bandwidth.py script directly accesses the primary
datastore on the host for both upload and download operations.
If you use the vxm_backup_restore_limited_bandwidth.py script, substitute that script name in the commands in this task.
See KB 203882 for instructions to run a sed command before running the
vxm_backup_restore_limited_bandwidth.py script. If you have a dynamic node cluster and the primary storage type
is vSAN HCI mesh, the primary storage must be provisioned first. See KB 185917.
This procedure applies to VxRail clusters running VxRail version 8.0.x and later. See the VxRail 8.0 Support Matrix for a
list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Back up the VxRail Manager manually
If you have a limited bandwidth environment, use the vxm_backup_restore_limited_bandwidth.py script.
About this task
CAUTION: You may not have access to some of the VxRail features during the backup process because the script
restarts services. Wait for a few minutes until the backup finishes and the services are ready to use.
Steps
1. To access the VxRail Manager bash shell, log in to the VMware vSphere Web Client as administrator and perform the
following:
a. From the Inventory icon, select the VxRail Manager VM.
b. On the Summary tab, click LAUNCH REMOTE CONSOLE.
c. Log in to the VxRail Manager as root or log in to the VxRail Manager VM as mystic and su to root.
2. You can create a backup with or without VxRail Manager logs. Select one of the following:
● To create a backup without the VxRail Manager logs, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -b
NOTE: If your environment has limited bandwidth, you can use the
vxm_backup_restore_limited_bandwidth.py script.
● To create a backup that includes VxRail Manager logs, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -b --keeplog
NOTE: You may not be able to access some of the VxRail features during the backup process because the script
includes restarting the services. Wait 2 to 3 minutes until the backup finishes and the services are ready to be used.
3. To verify that the backup is complete and to list the backup copies, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -l
4. (OPTIONAL) To list the existing services, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -d
5. The following step is only required after a first run and after an upgrade. After the first backup, manually back
up the recoveryBundle.zip to the primary datastore. For an upgraded VxRail, replace the old
recoveryBundle.zip with the new one.
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a host and click the Configure tab.
c. Select System > Services.
d. Select SSH and click START.
e. Select ESXi Shell and click START.
6. To back up the recoveryBundle.zip, use SSH to log in to the VxRail Manager VM as mystic and su to root.
a. For the VMware vSAN cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@[hostIP]:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@[hostIP]:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

b. For the dynamic node cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@[hostIP]:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@[hostIP]:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/
Back up VxRail Manager
Back up VxRail Manager from the cluster.
Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Configure tab.
3. Under VxRail Integrated Backup, select the STATUS tab.
4. Click CREATE BACKUP.
Configure automatic backup for the VxRail Manager
VxRail Manager performs backups according to the defined backup policy.
Prerequisites
Before you schedule the backup, manually back up the recoveryBundle.zip file to the primary datastore. This step is only
required after a first run and after an upgrade. For an upgraded VxRail, replace the old recoveryBundle.zip file with the
new one.
1. To back up the recoveryBundle.zip to the primary datastore manually, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a host and click the Configure tab.
c. Select System > Services.
d. Select SSH and click START.
e. Select ESXi Shell and click START.
2. To back up the recoveryBundle.zip to the primary datastore, use SSH to log in to the host:

ssh root@<host_ipaddr>

If lockdown mode is enabled, enter:

ssh vxrailmanagement@<host_ipaddr>
a. For the VMware vSAN cluster, enter:
# mkdir /vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/
b. For the dynamic node cluster, enter:
# mkdir /vmfs/volumes/<primary storage name>/VxRail_backup_folder_*****/
3. Use SSH to log in to the VxRail Manager VM as mystic and su to root.
a. For the VMware vSAN cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@<host_ipaddr>:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@<host_ipaddr>:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

b. For the dynamic node cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@<host_ipaddr>:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@<host_ipaddr>:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/
Next steps
To stop the automatic backup, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -c --period manual
Manage the backup policy
The backup script uses the VxRail Manager operating system time to run the periodic backup job. If the VxRail Manager is
not in your time zone, adjust the backup time to match the VxRail Manager time zone.
Prerequisites
To determine the VxRail Manager operating system time zone, enter: date
About this task
To set the backup policy using the command line, see the following table of command-line options:

Command-line option    Description
--period               manual, daily, weekly, or monthly
--hour                 The hour to run the script
--minute               The minute to run the script
--rotation             The number of backups to maintain
--keeplog              Include the VxRail Manager logs in the backup
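The backup time follows the VxRail Manager operating system clock, so if that clock is in a different time zone, convert your intended local time before choosing --hour and --minute. A short Python sketch of the conversion (standard library only; the zone names and sample time are hypothetical):

from datetime import datetime
from zoneinfo import ZoneInfo

# You want the backup at 01:15 local time (America/New_York),
# but the VxRail Manager operating system runs in UTC.
local = datetime(2023, 11, 1, 1, 15, tzinfo=ZoneInfo("America/New_York"))
vxm = local.astimezone(ZoneInfo("UTC"))
print(vxm.strftime("%H:%M"))  # 05:15 -> use --hour 5 --minute 15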
Steps
1. To configure a backup policy that runs every day at 1:15 a.m., keeps eight backup copies, and includes the VxRail Manager
logs, enter:

cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -c --period daily --hour 1 --minute 15 --rotation 8 --keeplog
Follow the same method to configure a monthly or weekly backup policy.
2. To set or change the backup policy using the wizard, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -c
Current user is root. We can do current job.
Download vxmbackup.json from datastore.
Connecting to vCenter [vc.app24.local]
domain-c27
data center path is /VxRail
download file http_url:
https://vc.app24.local/folder/VxRail_backup_folder/vxmbackup.json?dcPath=%2FVxRail&dsName=VxRail-Virtual-SAN-Datastore-35c360a9-23e0-4df3-bb8e-55447a6bbab1
rotation_type: [daily], do you want to change it? [y|n]y
rotation_type: [daily]
Choose your rotation:
1) manual
2) daily
3) weekly
4) monthly
Choose rotation type. choose [1-4]:2
Current schedule time is 15:15. Change it? [y|n]y
Set hour with value [0-23], current is [15]:1
Set minute with value [0-59], current is [15]:15
Current rotation number is 8. Do you want to change it? [y|n]y
Set rotation number with value [7-24], current is [7]:8
Current keeplog flag is 1. Do you want to change it? [y|n]y
Set keep_log with value [0 - no log, 1 - keep log], current is [1]:1
---------update crontab---------
15 1 * * * root /usr/bin/logger 'VxM rotation backup start.' && python /mystic/vxm_backup_restore/vxm_backup_restore.py -b --keeplog
cronjob is updated.
Data center path is /VxRail
download file http_url: https://vc.app24.local/folder/VxRail_backup_folder/vxmbackup.json?dcPath=%2FVxRail&dsName=VxRail-Virtual-SAN-Datastore-35c360a9-23e0-4df3-bb8e-55447a6bbab1
schedule config is updated and uploaded to datastore
{"rotation":[],"backup_policy":
{"rotation_type":"daily","week_day":"0","month_day":"1","backup_time_hour":"1","backup
_time_minute":"15","backup_file_limit":"8","keep_log":"1"}}
[Schedule config job END]
This example is for the VMware vSAN cluster. If you are using a dynamic node cluster, the process is similar, but the target
backup folder is different.
13
VxRail Manager file-based restore
The file-based restore uses internal or external DNS to recover VxRail Manager during an unrecoverable failure.
Prerequisites
Create a file-based backup to reference.
1. Download the same version of the VxRail Manager Package for Restore OVA from Dell Support to a storage
device that is accessible by the VMware vCenter Server.
2. Deploy the VxRail Manager OVA using a compatible browser with the VMware vSphere Client Integration plug-in enabled.
Chrome with the extended support release feature is compatible. The stand-alone VMware vSphere Client is not supported
for deploying the VxRail Manager virtual appliance from the OVA file. See KB 2130672 and KB 2125623 for more details.
3. Obtain the VMware vCenter Server FQDN and IP address that is managing your VxRail Manager.
4. Obtain the administrator or VxRail VMware vCenter management user credentials.
5. Obtain the VxRail cluster data center name and cluster name.
6. Obtain a different IP address on the same subnet as that of the original VxRail Manager. The new IP address is temporary
and cannot have an entry in the DNS server.
7. If you have a dynamic node cluster and the primary storage type is VMware vSAN HCI mesh, you must first provision primary
storage. See KB 185917 for more details.
If Secure Boot is enabled on the VxRail Manager, it is disabled after the file-based restore procedure is complete. To enable
Secure Boot again, see KB 199797.
About this task
Use a backup script on the VxRail Manager VM to archive the VxRail Manager configuration files, database tables, and optionally
the logs. Run the script manually or set a schedule for automatic backups. Backups are stored in a folder on the VxRail primary
datastore. To restore the VxRail Manager, apply the backup configuration to restore the configuration files and database tables
to a newly deployed VxRail Manager VM.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster. This
procedure is not available to customers.
This procedure applies to VxRail clusters running VxRail version 7.0.450 and VxRail version 8.0.x and later.
Restore the VxRail Manager using external DNS
Deploy the VxRail Manager OVA through the VMware vSphere Web Client onto the VxRail primary datastore.
Steps
1. Log in to the VMware vSphere Web Client as administrator.
a. Right-click the VxRail cluster and select Deploy OVF Template.
b. Select the OVA file that you downloaded from the Dell Support site.
c. From the Deploy OVF Template left-menu, click Select a name and folder. Enter a unique name for the VxRail
Manager and deploy the VxRail Manager VM in a VM folder. Click NEXT.
d. Click Select a compute resource and click NEXT.
e. Click Review details and click NEXT.
f. Click Select storage and assign the VM to the storage policy VXRAIL-SYSTEM-STORAGE-PROFILE for the VMware
vSAN cluster. For dynamic node clusters, assign the VM to the storage policy Datastore Default if the primary storage
type is VMFS on FC, or to the vSAN-compliant policy that is provisioned on the primary storage if the primary storage
type is vSAN HCI mesh. Click NEXT.
g. Click Select networks to select a destination network for each source network and perform the following:
● As the Destination Network for Network 1, select the vCenter Server Network-<uuid> port group from the
drop-down.
● As the Destination Network for Network 2, select the VxRail Management-<uuid> port group from the drop-down.
h. Click NEXT > FINISH.
2. Before powering on the new VxRail Manager, to change the Guest OS Family and Guest OS Version, perform the
following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the VxRail Manager VM and click Edit Settings.
c. Under the VM Options menu, click General Options.
d. From the Guest OS Family drop-down, select Linux.
e. From the Guest OS Version drop-down, select Other Linux (64-bit).
3. To disable the vApp option, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the VxRail Manager VM.
c. From the Configure tab, select Settings and click EDIT to view the Enable vApps Options checkbox.
d. Clear the Enable vApps Options checkbox and click OK.
4. Right-click the newly deployed VxRail Manager and select Power > Power On.
5. Open the VxRail Manager VM console from the VMware vSphere Web Client.
6. Log in to the VxRail Manager as root.
NOTE: The default VxRail Manager root password is Passw0rd!
7. To configure the new VxRail Manager network, enter:
# cd /mystic/vxm_backup_restore/
# python vxm_backup_restore.py -n
When prompted, provide the IPv4 address, netmask address, and the gateway address for the new VxRail Manager.
8. (OPTIONAL) If you use the vxm_backup_restore_limited_bandwidth.py script or perform the backup from the
UI, copy the following files from the original VxRail Manager to the new VxRail Manager to enable DNS. You can locate the
script on the VxRail Manager when it is deployed to perform the restore.
● /etc/hosts is used for the internal DNS server.
● /etc/resolv.conf is used for the external DNS server.
Using SSH, log in to the new VxRail Manager VM as mystic. To switch to root, enter:

su root

To copy the files, enter:

scp root@<original_vxrail_manager_ipaddress>:/etc/hosts /etc/
scp root@<original_vxrail_manager_ipaddress>:/etc/resolv.conf /etc/
9. (OPTIONAL) Download the VxRail Manager backup file from the SFTP server to the VMware vSAN storage.
A VxRail 7.0.450 or later cluster supports backing up the VxRail Manager to both the local VMware vSAN storage and a
remote SFTP server. If the vSAN backup file fails or the VxRail Manager backup is missing, you can restore the VxRail
Manager from the remote SFTP server.
a. Connect to the remote SFTP server where the backup file was stored and go to the path where it was originally saved.
b. Use an SFTP client tool (such as WinSCP or FileZilla) to download the backup file onto your own system.
● If the downloaded VxRailArchive file has the file extension .enc, the file is encrypted and must be decrypted using the
openssl command. Go to step c.
● If the file extension of VxRailArchive is .tgz, the backup file was generated without encryption. You do not need to
decrypt it and you can skip steps c and d.
c. Go to Install OpenSSL to install OpenSSL.
d. cd to the file location and decrypt the backup file:

openssl aes-256-cbc -d --pbkdf2 -iter 100000 -in <filename.tgz.enc> -pass pass:<decrypt password> -out <filename.tgz>

For example:

openssl aes-256-cbc -d --pbkdf2 -iter 100000 -in VxRailArchive_20230323015842_27981418.tgz.enc -pass pass:ABCabc123! -out VxRailArchive_20230323015842_27981418.tgz

e. To upload VxRailArchive_20230323015842_27981418.tgz from the VMware vSphere Web Client, select the
VxRail-Virtual-SAN-Datastore > VxRail_backup_folder and click UPLOAD FILES.
f. Click UPLOAD.
g. Select the VxRail Manager backup tgz file to upload.
10. To start the restore wizard, enter:
cd /mystic/vxm_backup_restore/
python vxm_backup_restore.py -r --vcenter <VMware vCenter Server ip_address>
Verify that the original VxRail Manager is powered off before the new VxRail Manager reboots. The VxRail Manager
restarts its services in approximately five minutes.
11. To restore the recoveryBundle.zip from the primary datastore to the new VxRail Manager, perform the following:
a. Log in to the VMware vCenter server using the VMware vSphere Web Client.
b. Open the SSH service on any of the hosts:
Select Host > Configure > System > Services.
Select SSH > Start.
Select ESXi Shell > Start.
c. Restore the recoveryBundle.zip to the VxRail Manager:
Using SSH, log in to the new VxRail Manager VM as mystic.
To switch to root, enter:
su root
mkdir /data/store2/recovery
cd /data/store2/recovery
For the VMware vSAN cluster, enter:

scp root@[Host-IP]:/vmfs/volumes/VxRail-Virtual-SAN****/VxRail_backup_folder/recoveryBundle.zip ./

If lockdown mode is enabled, enter:

scp vxrailmanagement@[Host-IP]:/vmfs/volumes/VxRail-Virtual-SAN****/VxRail_backup_folder/recoveryBundle.zip ./

For the dynamic node cluster, enter:

scp root@[Host-IP]:/vmfs/volumes/<cluster_primary_storage_name>/VxRail_backup_folder_******/recoveryBundle.zip ./

If lockdown mode is enabled, enter:

scp vxrailmanagement@[Host-IP]:/vmfs/volumes/<cluster_primary_storage_name>/VxRail_backup_folder_******/recoveryBundle.zip ./
d. Verify the file ownership and permissions.
Verify that the ownership of the folder /data/store2/recovery is tcserver:pivotal. Otherwise, correct the
ownership. Example:

chown tcserver:pivotal /data/store2/recovery

Ensure that the permissions of the folder /data/store2/recovery/slim are drwxrwx---. Correct the permissions if
needed. Example:

chmod 770 /data/store2/recovery/slim

Create the /data/store2/recovery/slim folder if it does not exist. Ensure that the ownership of the folder
/data/store2/recovery/slim is root:docker. Correct the ownership if needed. For example:

chown -R root:docker /data/store2/recovery/slim

If /data/store2/recovery/slim existed on the previous VxRail Manager, the slim folder may already exist after the SCP.
Restore the VxRail Manager using internal DNS
About this task
Use the following link to install OpenSSL: Install OpenSSL
Steps
1. To deploy the VxRail Manager OVA through the VMware ESXi Web Client onto the VxRail primary datastore, perform the
following:
a. Log in to the VMware ESXi host client as root.
b. Create the VMware virtual standard switch and port group on the VMware ESXi.
c. To enable SSH on the VMware ESXi Host Client, select Manage and select the Services tab. Click Enable SSH
service.
d. To remove one PNIC from the VMware VDS, use SSH to log in to the VMware ESXi host:
ssh root@<host_ipaddr>
If lockdown mode is enabled, enter:
ssh vxrailmanagement@<host_ipaddr>
To list the existing vSwitch configuration and to check the vmnic1 port, enter:

esxcfg-vswitch -l    // check for the vmnic1 port (example: 13)

To move vmnic1 out of the VMware VDS, enter:

esxcfg-vswitch -Q vmnic1 -V 13 "VMware HCIA Distributed Switch"

Ensure that at least one other PNIC remains for the port groups that were using vmnic1.
NOTE: VMNIC1 is used in the example and the NIC is from the management network. In some VxRail models, only
VMNIC2 and VMNIC3 are used for the management network in the VMware HCIA Distributed Switch. You can use
VMNIC3 instead of VMNIC1.
e. To create a VMware standard switch to use this PNIC, select Networking. Under the Virtual switches tab, click Add
standard virtual switch.
f. To create a port group, select Networking. Under the Port groups tab, click Add port group.
g. Power off or delete the old VxRail Manager on the VMware ESXi Web Client.
h. Deploy a new VxRail Manager by OVF using the new port group.
i. Log in to the VxRail Manager and change the VxRail Manager IP address to the old one.
For IPv4, enter:

/opt/vmware/share/vami/vami_set_network eth0 STATICV4 <vxrail_manager_ipaddress> <vxrail_manager_netmask> <vxrail_manager_gateway>

For IPv6, enter:

/opt/vmware/share/vami/vami_set_network eth0 STATICV4+STATICV6 <vxrail_manager_ipv4> <vxrail_manager_netmask> <vxrail_manager_gateway> <vxrail_manager_ipv6> <prefix_length> <vxrail_manager_ipv6_gateway>
2. To restore the backup data, perform the following:
a. Log in to the VMware ESXi Host Client as root.
b. From the VMware ESXi, to download the backup file from the primary datastore, right-click the storage and select
Datastore browser and click Download.
c. Extract the file and upload the /etc/hosts file to a new VxRail Manager.
d. Restart the dnsmasq service.
e. Go to Restore the VxRail Manager using external DNS to run the vxm_backup_restore.py script and restore the
recoveryBundle.zip from the primary datastore to the VxRail Manager.
3. To move the new VxRail Manager to the VMware vCenter Server network and set the VxRail Manager port group as the
original VxRail Manager, perform the following:
a. From the VMware vSphere Web Client, log in to the VMware vCenter Server as an administrator.
b. Right-click the new VxRail Manager and click Edit Settings.
c. Log in to the VMware ESXi Host Client and delete the VMware virtual switch that was created in Step 1.
d. From the VMware vSphere Web Client, log in to the VMware vCenter Server as an administrator.
e. From the Network tab, click the VMware VDS.
f. Select ACTIONS > Add and Manage Host to assign the PNIC.
g. Click Assign uplink to assign the removed PNIC to the VMware VDS. Click NEXT.
h. Delete the port group and the old VxRail Manager VM that was created on the VMware ESXi. If the VM port group in the
VMware VDS is ephemeral, skip Step 1 and Step 3.