Dell EMC PowerFlex Rack
Administration Guide
November 2021
Rev. 11.0
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2016 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents

Revision history

Chapter 1: Introduction
    PowerFlex rack deployment options
    LACP bonding NIC port design VLAN names
    Using an embedded operating system-based jump server
        Install the embedded operating system-based iDRAC tools
        Copy files using SCP

Chapter 2: Monitoring system health
    Monitoring system resources
    Managing compliance

Chapter 3: Configuring and viewing alerts
    Configuring SNMP trap and syslog forwarding
        Configure SNMP trap forwarding
        Configure syslog forwarding
    Edit the alert connector settings
    View alerts
        How alerts are acknowledged
        Alert messages and recommended actions
    PowerFlex alert messages

Chapter 4: Managing components with PowerFlex Manager
    PowerFlex Manager limits
    Managing external changes
    Performing maintenance activities in a PowerFlex cluster
        Entering protected maintenance mode
        Exiting protected maintenance mode
        Data assurance during maintenance
    Redistribute the MDM cluster using PowerFlex Manager
    Remove a service

Chapter 5: Administering the network
    Add a network to a service
    Administering the network outside of PowerFlex Manager
        Optimizing the network
        Monitoring the network
        Adding a VLAN to the network
        Creating a network interface card team for Windows Server 2016 or 2019

Chapter 6: Administering the storage
    Considerations
    PowerFlex management controller datastore and virtual machine details
    Viewing PowerFlex Gateway details
    Administering the CloudLink Center
        Adding and managing CloudLink Center licenses
        Configure custom syslog message format
        Manage a self-encrypting drive (SED) from CloudLink Center
        Manage a self-encrypting drive from the command line
        Release a self-encrypting drive
        Release management of a self-encrypting drive from the command line
    Configure the PowerFlex compute-only nodes outside of PowerFlex Manager
    Administering the storage outside of PowerFlex Manager
        Configuring DNS
        Configure PowerFlex storage-only nodes and PowerFlex hyperconverged nodes with static route for SDC reachability
        Retrieving PowerFlex performance metrics
        Verifying VMware vSphere host settings
        Configuring compression on PowerFlex storage-only nodes with NVDIMMs
        Verify the VMware ESXi host recognizes the NVDIMM
        Migrate vCLS VMs for PowerFlex hyperconverged nodes and PowerFlex compute-only nodes
        Set V-Tree compression mode for PowerFlex GUI
        Defining rebuild and rebalance settings for PowerFlex
        Adding a PowerFlex storage-only node to PowerFlex
        Adding a PowerFlex storage-only node with an NVMe disk to PowerFlex
        Configuring replication on PowerFlex storage-only nodes
        Remote asynchronous replication on PowerFlex hyperconverged nodes
        Determining and switching the PowerFlex Metadata Manager
        Rebooting PowerFlex nodes
        Entering protected maintenance mode using the PowerFlex GUI presentation server
        Exiting protected maintenance mode using the PowerFlex GUI presentation server
        Adding and removing a PowerFlex device for server maintenance
        Unmapping and mapping a volume
        Install and configure a Windows-based compute-only node to PowerFlex
        Updating a PowerFlex device path
        Disabling the PowerFlex RAM read cache
        Using Trusted Platform Module
        Configuring PowerFlex for Secure Remote Services
        Enabling and disabling SDC authentication
        Disabling persistent checksum on medium granularity storage pools

Chapter 7: Backing up and restoring
    Backing up and restoring using PowerFlex Manager
    Backing up and restoring CloudLink Center
        Viewing backup information
        Changing the schedule for automatic backups
        Generating a backup file manually
        Generating a backup key pair
        Downloading the current backup file
        Restore the CloudLink backup
    Using permanent device loss
        Using permanent device loss with VMware vSphere
        Enabling and disabling permanent device loss in VMware vSphere

Chapter 8: Powering on and off
    Power on a Technology Extension with Isilon Storage
    Power on a PowerFlex rack
    Power on the PowerFlex management controller 2.0
    Power on the PowerFlex management controller 1.0
    Power on the VMware NSX-T Edge nodes
    Power on PowerFlex storage-only nodes
    Power on all PowerFlex hyperconverged nodes
    Power on all PowerFlex compute-only nodes
    Complete the powering on of PowerFlex rack
    Power off a PowerFlex rack
    Power off protection domains using PowerFlex GUI
    Power off protection domains using a PowerFlex version prior to 3.5
    Power off PowerFlex compute-only nodes with VMware ESXi
    Power off PowerFlex hyperconverged nodes with VMware ESXi
    Power off PowerFlex compute-only nodes with Windows Server 2016 or 2019
    Power off the VMware NSX-T Edge nodes
    Power off the PowerFlex management controller 2.0
    Power off the PowerFlex management controller 1.0
    Complete the powering off of PowerFlex rack
    Power off a Technology Extension with Isilon Storage

Chapter 9: PowerFlex rack password management
    Updating passwords for system components
        Updating passwords for nodes
        Updating passwords for PowerFlex Gateway components
    Update a credential in PowerFlex Manager
    Updating passwords outside of PowerFlex Manager
        Compute
        Storage
        Virtualization
        Changing CloudLink passwords
        Changing the PowerFlex management controller password
        Managing embedded operating system users and passwords
        Managing SUSE users and passwords
        Managing Red Hat Enterprise Linux users and passwords
        Changing the IPI appliance password
        Changing a user account password on the IPI appliance

Chapter 10: Installing and configuring PowerFlex Manager
    Installation prerequisites
    Configuring the operating system installation (PXE) network
        Configure the switches
        Configure the operating system installation (PXE) port group on the controller cluster
        Configure the operating system installation (PXE) port group on the customer cluster
    Deploy PowerFlex Manager
    Setting up PowerFlex Manager
        Changing the Dell Admin password
        Configuring the networks
        Configure the date and time
        Change the hostname
    Enable remote access to the PowerFlex Manager VM
    Configuring PowerFlex Manager
        Discover resources
        Discover an existing cluster
        Configure the operating system installation (PXE) network in PowerFlex Manager
        Clone a template
        Configuring network settings in PowerFlex Manager templates
        Publish a template
        Add a new compatibility management file
        Deploying the PowerFlex GUI presentation server
        Deploy the PowerFlex Gateway
        Deploy the CloudLink Center
        Deploy a service
        Add volumes to a service
        Resize a volume
        Migrate vCLS VMs for PowerFlex hyperconverged nodes and PowerFlex compute-only nodes
        Provide a PowerFlex Manager license key after initial setup
        Configure the alert connector
        Configuring SNMP trap and syslog forwarding
        Deploying Windows-based PowerFlex compute-only nodes with PowerFlex Manager (bare metal deployment)
        Managing users
        Minimum VMware vCenter permissions required to support PowerFlex Manager functionalities in different modes
Revision history
| Date | Document revision | Description of changes |
|---|---|---|
| November 2021 | 11.0 | Added support for: PowerFlex management controller 2.0, an R650-based controller that uses PowerFlex storage and a VMware ESXi hypervisor; PowerFlex R650 hyperconverged, storage-only, and compute-only nodes; PowerFlex R750 hyperconverged, storage-only, and compute-only nodes; PowerFlex R6525 hyperconverged, storage-only, and compute-only nodes; PowerFlex Manager 3.8; CloudLink 7.1 |
| July 2021 | 10.0 | Added support for: PowerFlex 3.6; VMware vSphere 7 Update 1; Cisco Nexus switches (N93180YC-FX access/management aggregation/leaf switch, N9364C-GX spine/leaf switch, N92348GC-X management switch); multi-tenancy with L3 routing between SDS and SDC communication; embedded operating system jump server support for existing systems; 25 Gb rack network daughter card (rNDC); CloudIQ |
| March 2021 | 9.1 | Updated content for discovering an existing cluster |
| December 2020 | 9.0 | Added support for the embedded operating system-based jump server and VMware ESXi 7.0. Added content on configuring network settings in PowerFlex Manager templates |
| September 2020 | 8.1 | Editorial updates |
| August 2020 | 8.0 | Added content for protected maintenance mode, native asynchronous replication, and the LACP bonding NIC port design. Updated content for CloudLink 6.9 and PowerFlex 3.5. Removed content for Red Hat Virtualization |
| April 2020 | 7.0 | Added content on PowerFlex Manager users to the Configuring PowerFlex Manager section. Added support for 100 Gb leaf-spine networks. Updated content for CloudLink |
| January 2020 | 6.0 | Added content for VxFlex Manager alerting, including removal of the OpenManage Enterprise dependency. Updated the PowerFlex Manager Installation prerequisites topic with the resource requirements to install PowerFlex Manager. Added content for: PowerFlex compute-only nodes based on Windows Server 2016; NVMe on PowerFlex storage-only nodes and PowerFlex hyperconverged nodes; NVDIMMs on PowerFlex hyperconverged nodes and PowerFlex storage-only nodes; Dell EMC Networking switches; spine scale support for more than 500 nodes; storage-only PowerFlex rack. Updated the PowerFlex rack password management section to include a topic on updating passwords in PowerFlex Manager. Updated the Changing the default admin MDM password topic to include the command to change the MDM password on the PowerFlex Gateway. Added support for CloudLink |
| July 2019 | 5.0 | Added content for: PowerFlex rack VLAN EVPN leaf-spine configuration; NVDIMM; managing components with PowerFlex Manager; monitoring system health; configuring SNMP trap and syslog forwarding; installing PowerFlex Manager; configuring the OS installation (PXE) network; configuring PowerFlex Manager; enabling remote access to the PowerFlex Manager VM. Added a section on PowerFlex rack password management (previously located in the VxFlex Integrated Rack Security Configuration Guide). Updated the Backing up and restoring using PowerFlex Manager, Monitoring and alerting using Secure Remote Services, and Adding a PowerFlex storage-only node with an NVMe disk to PowerFlex content |
| January 2019 | 4.0 | Added support for the two-layer deployment, Red Hat Virtualization, and NVMe drives. Added information for Power On and Power Off |
| August 2018 | 3.0 | Added PowerFlex Manager. Major revisions for the 14th generation of Dell EMC PowerEdge servers. Revisions for PowerFlex |
| April 2018 | 2.0 | Major revisions and additions in compute, network, storage, and management administrative tasks for 13th generation Dell EMC PowerEdge server deployments |
| June 2017 | 1.3 | Minor updates for GA release |
| April 2017 | 1.2 | Added Configuring system monitoring for the VxRack server |
| October 2016 | 1.1 | Updated management software information. Updated to support ScaleIO 2.0. Updated product name. Added Configuring the baseboard management controller to send SNMP alerts |
| July 2016 | 1.0 | Initial release |
Chapter 1: Introduction
This guide provides procedures for administering PowerFlex rack.
It provides the following information:
● Monitoring system health
● Configuring and viewing alerts, including configuring SNMP trap and syslog forwarding
● Managing components with PowerFlex Manager
● Administering the operating system, network, and storage
● Backing up and restoring
● Powering on and off
● Managing PowerFlex rack component passwords
The target audience for this document includes system administrators responsible for managing PowerFlex rack, and Dell EMC
personnel responsible for remote management.
If you need to install PowerFlex Manager, see Installing and configuring PowerFlex Manager.
Dell EMC PowerFlex rack was previously known as Dell EMC VxFlex integrated rack. Similarly, Dell EMC PowerFlex Manager
was previously known as Dell EMC VxFlex Manager, and Dell EMC PowerFlex was previously known as Dell EMC VxFlex OS.
References in the documentation will be updated over time.
PowerFlex rack architecture is based on Dell EMC PowerEdge R650, R750, R6525, R640, R740xd, and R840 servers.
Dell EMC Networking S5200 series switches are now known as Dell EMC PowerSwitch S5200-ON series switches. Dell EMC
Networking S4100 series switches are now known as Dell EMC PowerSwitch S4100-ON series switches. References in the
documentation will be updated over time.
PowerFlex Manager provides the management and orchestration functionality for PowerFlex rack. References to PowerFlex
Manager in this document apply only if you have a licensed version of PowerFlex Manager. For more information, contact Dell
Technologies Sales. If your system uses Vision Intelligent Operations software, work with your Dell Technologies sales team to
obtain a PowerFlex Manager license.
Procedures in this book for using VMware vSphere apply to both VMware vSphere Client and VMware vSphere Web Client,
except where noted.
See the Glossary for terms, definitions, and acronyms.
PowerFlex rack deployment options
PowerFlex rack has several options for deployment.
PowerFlex runs on PowerFlex rack nodes to operate the management and customer storage and tie in workloads. PowerFlex has
the following components:
● Storage Data Client (SDC): Consumes storage from the PowerFlex rack
● Storage Data Server (SDS): Contributes node storage to the PowerFlex rack
● PowerFlex Metadata Manager (MDM): Manages the storage blocks and tracks data location across the system
● Storage Data Replication (SDR): Enables replication on PowerFlex storage-only nodes
PowerFlex enables flexible deployment options by allowing the separation of SDC and SDS components. It addresses data
center workload requirements through the following PowerFlex rack deployment options:
| Deployment type | Description |
|---|---|
| Full hyperconverged | Consists of PowerFlex hyperconverged nodes, which contribute both compute and storage resources to the virtual environment. Front-end (application) and back-end (storage) traffic share the same PowerFlex data networks. This includes PowerFlex hyperconverged nodes and PowerFlex storage-only nodes with NVMe. NVDIMM data compression is supported on PowerFlex hyperconverged nodes and PowerFlex storage-only nodes. |
| Two-layer | Separates compute resources from storage resources, allowing the independent expansion of compute or storage resources. Consists of PowerFlex compute-only nodes (supporting the SDC) and PowerFlex storage-only nodes (connected to and managed by the SDS). PowerFlex compute-only nodes host end-user applications. PowerFlex storage-only nodes contribute storage to the system pool. |
| Hybrid hyperconverged | Consists of PowerFlex hyperconverged nodes, PowerFlex compute-only nodes, and PowerFlex storage-only nodes. Some PowerFlex nodes contribute both compute and storage resources (PowerFlex hyperconverged nodes), some contribute only compute resources (PowerFlex compute-only nodes), and some contribute only storage resources (PowerFlex storage-only nodes). |
| Storage-only | Consists of nodes that contribute storage resources to the virtual environment. The back-end traffic shares the same PowerFlex data networks. The storage-only PowerFlex rack provides volumes to external customer compute, limited to Dell EMC PowerEdge servers. Storage Data Replication (SDR) is installed to enable native asynchronous replication on the PowerFlex storage-only nodes. No SDC components are installed on these nodes. |
| Dual network | Consists of a solution integrated into your existing software-defined network (SDN), such as Cisco ACI. Only PowerFlex hyperconverged nodes and PowerFlex compute-only nodes are affected; they have two NICs cabled to a pair of customer access/leaf switches. Aggregation switches by default are tied into the customer SDN border for L2/L3 access. The solution uses two new customer access switches, two additional network ports on the host, and a new distributed virtual switch to carry traffic seamlessly into the software-defined network. |
Two-layer deployments allow rebooting of VMware cluster nodes without PowerFlex ramifications.
When designing the initial deployment or specifying later growth, use PowerFlex hyperconverged nodes if both PowerFlex compute-only nodes and PowerFlex storage-only nodes are needed. You can add PowerFlex compute-only nodes or PowerFlex storage-only nodes as needed.
To control the number of processors or cores, consider separating the compute for the application from the PowerFlex nodes
that support storage. This deployment is a pure two-layer deployment. Extra workloads are supported or added on:
● PowerFlex hyperconverged nodes
● Two-layer PowerFlex compute-only nodes
● PowerFlex storage-only nodes
This creates a hybrid deployment.
PowerFlex rack is configured with a single VMware vCenter, consisting of separate datacenters for controller and customer
nodes. To access controller nodes, use the PowerFlex management controller. To access customer nodes, use the PowerFlex
operating system for the customer cluster.
LACP bonding NIC port design VLAN names
Depending on when the system was built, your VLAN names might differ from what is in the documentation. The following table
lists the VLAN names used with the current LACP bonding NIC port design along with the former VLAN names.
NOTE: VLANs pfmc-<name>-<name>-<vlanid> are only required for PowerFlex management controller 2.0.
VLANs flex-vmotion-<vlanid> and flex-vsan-<vlanid> are only required for PowerFlex management controller 1.0.
| LACP bonding NIC port design VLAN name | Former name |
|---|---|
| flex-oob-mgmt-<vlanid> | con-mgmt-<vlanid> |
| flex-vcsa-ha-<vlanid> | vcsa-ha-<vlanid> |
| flex-install-<vlanid> | flexmgr-install-<vlanid> |
| flex-node-mgmt-<vlanid> | hv-mgmt-<vlanid> |
| flex-vmotion-<vlanid> | vm-migration-<vlanid> |
| flex-vsan-<vlanid> | stor-mgmt-<vlanid> |
| pfmc-sds-mgmt-<vlanid> | Not applicable |
| pfmc-sds-data1-<vlanid> | Not applicable |
| pfmc-sds-data2-<vlanid> | Not applicable |
| pfmc-vmotion-<vlanid> | Not applicable |
| flex-stor-mgmt-<vlanid> | fos-mgmt-<vlanid> |
| flex-rep1-<vlanid> | Not applicable |
| flex-rep2-<vlanid> | Not applicable |
| flex-data1-<vlanid> | fos-data1-<vlanid> |
| flex-data2-<vlanid> | fos-data2-<vlanid> |
| flex-data3-<vlanid> | fos-data3-<vlanid> |
| flex-data4-<vlanid> | fos-data4-<vlanid> |
| flex-tenant1-data1-<vlanid> | Not applicable |
| flex-tenant1-data2-<vlanid> | Not applicable |
| flex-tenant1-data3-<vlanid> | Not applicable |
| flex-tenant1-data4-<vlanid> | Not applicable |
| flex-tenant2-data1-<vlanid> | Not applicable |
| flex-tenant2-data2-<vlanid> | Not applicable |
| flex-tenant2-data3-<vlanid> | Not applicable |
| flex-tenant2-data4-<vlanid> | Not applicable |
| temp-dns-<vlanid> | temp-dns-<vlanid> |
| data-prot-<vlanid> | data-prot-<vlanid> |
Using an embedded operating system-based jump server
Depending on when the system was built, it uses either an embedded operating system-based jump server or a Windows-based jump server. The tools available for each jump server accomplish the same tasks. The procedures in this guide use the Windows-based jump server. If you are using a system with an embedded operating system-based jump server, refer to this topic to determine which tools to use instead.
A Windows-based jump server configuration uses the following tools:
● WinSCP for secure copies
● PuTTY for SSH access
● Remote Desktop (RDP) for remote login
An embedded operating system-based jump server configuration uses the following tools:
● SCP for secure copies
● SSH for login through secure shell
● VNC for remote login
● FileZilla for secure FTP (interactive SCP is not supported)
● Browsers, for example Chrome and Firefox
The following table lists Windows-based tools and the equivalent embedded operating system-based tool or location:

| Windows-based tool | Embedded operating system-based equivalent |
|---|---|
| WinSCP | SCP (from a terminal or console window) |
| D:\ | /shares/ |
| SSH (PuTTY) | SSH (from a terminal or console window) |
| RDP | VNC |
| PowerShell (Windows command terminal) | bash (from a terminal or console window) |
Install the embedded operating system-based iDRAC tools
Perform this procedure to install the iDRAC tools on an embedded operating system-based jump server.
Steps
1. Locate the embedded operating system-based iDRAC tools (RACADM) and installation instructions on the Dell Technologies Support site, and download the latest Linux version of the package.
2. Run the following command on the embedded operating system-based jump box to create a specific symlink to satisfy SSL
requirements:
sudo ln -s /usr/lib64/libssl.so.10 /usr/lib64/libssl.so
When the symlink is in place, RACADM tools will function as expected.
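Once installed, you can confirm that RACADM works by querying a node's iDRAC. A minimal check, assuming a hypothetical iDRAC address and credentials (replace them with values for your environment):

racadm -r 192.168.105.10 -u root -p 'calvin' getsysinfo

The -r, -u, and -p options select the remote iDRAC and its login; getsysinfo returns basic system inventory if the connection and the SSL setup above are working.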
Copy files using SCP
About this task
You can use SCP to copy files, for example a compliance file from the repository to a local folder.
Steps
1. Double-click the Konsole desktop icon to start a command shell.
2. In the new terminal window, use the following SCP command to copy files to the root directory in the target system:
scp /<target path>/<filename or folder> root@<destination>:/<folder to be copied to>
Example:
scp /shares/<compliance file folder>/PowerFlex/3.5.0.2/*.rpm root@10.234.115.22:/tmp
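Copying in the other direction works the same way by swapping the source and destination arguments. For example, to pull a file from a remote system back to the jump server (the remote path here is hypothetical):

scp root@10.234.115.22:/tmp/<filename> /shares/<local folder>/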
Download files to an embedded operating system-based jump server
The following is an example of a procedure to copy files from the Dell EMC Download Center using an embedded operating system-based jump server instead of a Windows-based jump server. This example is for reference only.
1. To save the packages, copy the /shares/<compliance file folder>/PowerFlex folder to the /shares/PowerFlexCurrent folder.
The Installation Manager uses these current packages during the installation.
2. On the /shares drive, delete the /shares/<compliance file folder> directory or folder to remove the old compliance files.
NOTE: 60+ GB of space is required on the /shares drive. Perform this action to make space for the new content.
3. Create a folder named /shares/<compliance file folder>.
4. Log in to the Dell EMC Download Center and download the compliance files to the /shares/<compliance file folder>. Ensure that you re-create the compliance file directory structure similar to what was previously there.
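Steps 1 through 3 map to a few shell commands. A minimal sketch, assuming a hypothetical compliance folder name of RCM-3.5.0.2:

# Step 1: preserve the current packages for the Installation Manager
cp -r /shares/RCM-3.5.0.2/PowerFlex /shares/PowerFlexCurrent
# Step 2: remove the old compliance files to free space on /shares
rm -rf /shares/RCM-3.5.0.2
# Step 3: re-create an empty folder for the new compliance download
mkdir -p /shares/RCM-3.5.0.2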
Chapter 2: Monitoring system health
Use PowerFlex Manager to monitor the system health.
PowerFlex Manager includes the following features:
● A dashboard that provides system configuration details and communicates health status for PowerFlex rack infrastructure
elements and services
● Release Certification Matrix (RCM) compliance monitoring and reporting
● RCM remediation for nodes, switches, PowerFlex, VMware ESXi, and CloudLink
● Hardware monitoring and alerting through either Secure Remote Services, email, or SNMP and syslog to Dell EMC Support
● Aggregated logging for troubleshooting, with the ability to send logs through Secure Remote Services to Dell EMC Support
Monitoring system resources
You can use PowerFlex Manager to monitor system health.
The following table describes common tasks for monitoring system health and what steps to take in PowerFlex Manager to
initiate each:
| If you want to... | Do this in PowerFlex Manager... |
|---|---|
| Monitor system resources and health | On the Dashboard, look at the Service Overview and Resource Overview sections. |
| View alerts | On the menu bar, click Alerts. |
Managing compliance
You can use PowerFlex Manager to manage software and firmware RCM compliance.
The following table describes common tasks and what steps to take in PowerFlex Manager to initiate each:
| If you want to... | Do this in PowerFlex Manager... |
|---|---|
| Monitor software and firmware compliance | 1. Click Services. 2. On the Services page, select a service. 3. On the Details page, under Service Actions, click View Compliance Report. |
| Perform software and firmware remediation | 1. From the compliance report, view the firmware or software components. 2. Click Update Resources to update non-compliant resources. |
| Generate a troubleshooting bundle | 1. Click Settings and then click Virtual Appliance Management. 2. Click Generate Troubleshooting Bundle. NOTE: You can also generate the troubleshooting bundle from the Services page. |
| Download a report that lists compliance details for all resources | 1. Click Resources. 2. Click Export Report and then click Export Compliance PDF Report or Export Compliance CSV Report. |
| Download a configuration report | 1. Click Resources. 2. Click Export Report > Export Configuration PDF Report. |
Upgrading software
See the Dell EMC PowerFlex Rack Upgrade Guide for information about incrementally upgrading PowerFlex rack software from
one RCM to the next.
Chapter 3: Configuring and viewing alerts
You can configure PowerFlex Manager to receive and display alerts from discovered PowerFlex rack components.
The alert connector is available through PowerFlex Manager. It sends email alerts on the health of PowerFlex nodes securely
through Secure Remote Services. Secure Remote Services routes alerts to the Dell EMC support queue for diagnosis and
dispatch.
When you use the alert connector with Secure Remote Services, critical alerts can automatically generate service requests.
Dell Technologies Services continuously evaluates and updates which alerts automatically generate service requests. For more information, contact Dell Technologies Services.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. PowerFlex Manager
receives SNMP alerts directly from iDRAC and forwards them to Secure Remote Services.
If not done at discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager by editing the alert
connector settings and selecting the Configure nodes for alert connector option.
PowerFlex Manager fetches telemetry reports from PowerFlex and forwards these reports to Secure Remote Services. The
reports are sent to the Dell Managed File Transfer (MFT) portal and can be leveraged by CloudIQ.
PowerFlex Manager forwards these reports:
● Configuration report: sent once a day
● Capacity report: sent every hour
● Performance report: sent every five minutes
● Alerts report: sent every five minutes
CloudIQ is enabled by default, but can be disabled on the Alert Connector page under Settings.
You can configure Dell EMC Networking OS10 switches to automatically send alerts to PowerFlex Manager by selecting the
automatic configuration option during discovery.
You must manually configure CloudLink to send alerts to PowerFlex Manager.
NOTE: The alert connector does not replace any monitoring software that you might already have, including any already
available through the PowerFlex rack such as PowerFlex Connect Home.
As of PowerFlex Manager version 3.3, OpenManage Enterprise is no longer required to connect to Secure Remote Services. If
you use OpenManage Enterprise for other functionality, note that it is no longer installed and we recommend using PowerFlex
Manager instead. In future upgrades, we will recommend removing this software module.
Related information
Configuring SNMP trap and syslog forwarding
Configuring SNMP trap and syslog forwarding
You can configure PowerFlex Manager for SNMP trap and syslog forwarding.
Configure SNMP communication to enable PowerFlex Manager to receive and forward SNMP traps. PowerFlex Manager can
receive SNMP traps from system devices and forward them to one or more remote network management systems.
You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network
management system. Authentication is provided by PowerFlex Manager, through the configuration settings you provide.
Configure SNMP trap forwarding
To configure SNMP trap forwarding, specify the access credentials for the SNMP version you are using and then add the
remote server as a trap destination.
About this task
PowerFlex Manager supports different SNMP versions, depending on the communication path and function. The following table
summarizes the functions and supported SNMP versions:
| Function | SNMP version |
|---|---|
| PowerFlex Manager receives traps from all devices, including iDRAC | v2 |
| PowerFlex Manager receives traps from iDRAC devices only | v3 |
| PowerFlex Manager forwards traps to the network management system | v2, v3 |
NOTE: SNMPv1 is supported wherever SNMPv2 is supported.
PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.
SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all
traps coming from all devices are forwarded to all configured destinations in the appropriate format.
PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically
purges the oldest data to free up space.
For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community
strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that
credential here.
For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community
strings to the network management system. This configuration happens outside of PowerFlex Manager.
For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex
Manager engine ID, user details, and security level to the network management system. This configuration happens outside of
PowerFlex Manager.
Prerequisites
PowerFlex Manager and the network management system use access credentials with different security levels to establish
two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the
security level for each access credential and whether the credential supports encryption.
To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following
table:
| If adding... | You must know... |
|---|---|
| SNMPv2 | Community strings by which traps are received and forwarded |
| SNMPv3 | User and security settings |
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings, and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.
4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the
community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.
You can add more than one community string. For example, add more than one if the community string by which PowerFlex
Manager receives traps differs from the community string by which it forwards traps to a remote destination.
NOTE: An SNMPv2 community string that is configured in the credentials during discovery of the iDRAC or through
management is also displayed here. You can create a new community string or use the existing one.
5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are
forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:
| Security Level | Details | Description | authPassword | privPassword |
|---|---|---|---|---|
| Minimal | noAuthNoPriv | No authentication and no encryption | Not required | Not required |
| Moderate | authNoPriv | Messages are authenticated but not encrypted | Required (MD5, at least 8 characters) | Not required |
| Maximum | authPriv | Messages are authenticated and encrypted | Required (MD5, at least 8 characters) | Required (DES, at least 8 characters) |
Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote
network management system so it can receive traps from PowerFlex Manager.
You can add more than one user.
6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.
a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager
forwards SNMP traps.
b. Provide the Port for the network management system destination. The SNMP Trap Port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are
configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the
appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user-defined,
select the appropriate user for the particular trap destination.
7. Click Save.
The Virtual Appliance Management page displays the configured details as shown below:
Trap Forwarding <destination-ip>(SNMP v2 community string or SNMP v3 user)
NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance
Management, and click Configure nodes for alert connector.
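To verify the full path, you can send a test SNMPv2 trap to PowerFlex Manager from any host that has the Net-SNMP command-line tools installed, then confirm that it arrives at the configured forwarding destination. A minimal sketch; the appliance address and community string are placeholders for your own values:

snmptrap -v 2c -c <community string> <powerflex-manager-ip> '' 1.3.6.1.6.3.1.1.5.1

The empty quotes let snmptrap supply the local system uptime, and 1.3.6.1.6.3.1.1.5.1 is the standard coldStart trap OID, which is harmless to use as a test event.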
Configure syslog forwarding
You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network
management system. PowerFlex Manager provides authentication through the configuration settings you provide.
About this task
You can configure PowerFlex Manager to forward syslogs to up to five destination remote servers. You can set only one
forwarding entry per remote server.
You can apply forwarding filters based on facility type and severity level. For example, you can configure PowerFlex Manager to
forward all syslog messages to one remote server and then forward syslog messages of a given severity to a different remote
server. The default is to forward syslog messages of all facilities and severity levels to the remote syslog server.
Prerequisites
Ensure that the system components are configured to send syslog messages to PowerFlex Manager. This configuration happens
outside of PowerFlex Manager.
Ensure that you have the following information:
● The IP address or hostname of the remote syslog server and the port where the server is accepting syslog messages
● If sending only some syslog messages to a remote server, the facility and severity of the log messages to forward
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the Syslog section, click Edit.
4. Click Add syslog forward.
5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.
6. Enter the destination Port where the remote server is accepting syslog messages. The default syslog port is 514.
7. Select the network Protocol used to transfer the syslog messages. The default is UDP.
8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.
9. Click Save to add the syslog forwarding destination.
The Virtual Appliance Management page displays the configured details as shown below:
Syslog Forwarding <destination-ip>(<Facility><Severity Level>)
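After saving, you can exercise the forwarding path by sending a test message to PowerFlex Manager from a Linux host with the util-linux logger utility, then checking that it appears on the configured destination server. A sketch with a placeholder address:

logger --server <powerflex-manager-ip> --port 514 --udp "PowerFlex Manager syslog forwarding test"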
Edit the alert connector settings
Perform this procedure if you need to modify the alert connector or registration settings.
About this task
When the alert connector is initially configured, discovered nodes are automatically enabled to send critical and error alerts for
node and PowerFlex level events to Secure Remote Services. If resources in the environment are expanded or changed, you
must configure nodes for the alert connector to enable alerting on any of the resources that expanded or changed. These nodes
must be configured for the alert connector in order for Secure Remote Services to receive critical or error alerts for those
resources.
Steps
1. In PowerFlex Manager, click Settings from the menu and then click Virtual Appliance Management.
2. Go to Alert Connector, click Edit, and change the settings as required.
● To temporarily suspend sending alerts to Secure Remote Services, click Edit to the right of the State field. You can choose to suspend the alerts for 3, 6, 12, or 24 hours. You can restart the sending of alerts at any time by clicking Enabled. The state and time that display are based on the hours selected for suspension and the NTP node settings.
● To deregister the alert connector and stop the collection and sending of alerts to Secure Remote Services, click
Deregister.
3. To configure nodes to send alerts to PowerFlex Manager, click Configure nodes for alert connector.
4. To verify that the alert connector is receiving alerts, click Send Test Alert.
View alerts
You can configure PowerFlex Manager to receive alerts from discovered PowerFlex nodes. The alerts display on the Alerts page
in PowerFlex Manager.
About this task
The Alerts page shows only node alerts. It does not include PowerFlex alerts. If you configure the alert connector to include
email notifications, these email notifications include both node alerts and PowerFlex alerts.
Each generated alert displays the following fields:

| Field | Description |
|---|---|
| Message ID | Contains a prefix and number that is unique to each alert |
| Acknowledged | Displays an icon when the alert is acknowledged. Hover over the icon to see the user who acknowledged the alert. |
| Severity | Critical indicates that you should take immediate action to address the issue. Unknown error indicates either the action that you should take to address the issue or a recommended workaround to reduce impact. Warning indicates that you should schedule time to address the issue. Informational indicates that the event requires no action. |
| Message | Describes the event |
| Category | Indicates the alert category: configuration, system health, updates, or miscellaneous |
For a list of alert messages and recommended response actions, see Alert messages and recommended actions.
PowerFlex Manager stores and displays up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager
automatically purges the oldest data to free up space.
Steps
On the menu bar, click Alerts.
How alerts are acknowledged
The alert connector automatically acknowledges alerts. When the alert connector picks up an alert and sends it home through
Secure Remote Services, it receives a Secure Remote Services acknowledgement and sets the alert to acknowledged.
The alert appears as acknowledged on the Alerts page in PowerFlex Manager.
To remove an alert acknowledgement, click Unacknowledge. The alert connector sends the alert home again at its next polling
interval. It then resets the alert back to acknowledged. You might remove an acknowledgement to resend an alert when there is
a question as to whether it was successfully sent.
Alert acknowledgement prevents repeatedly sending home the same alerts. Sending duplicate alerts to Dell EMC might result in
duplicate Service Request (SR) case creation and/or part dispatch.
Alert messages and recommended actions
Use the code in the alert messages PowerFlex Manager generates to identify and correct issues. The alert message data
contains information specific to the generated alert.
For PowerFlex node iDRAC codes, see the Events and Error Message Reference Guide for Dell EMC PowerEdge Servers.
Message ID: VXOS0001
Message: SDS has been in maintenance mode for more than 30 minutes. <SDS-ip> <PowerFlex Gateway>-<Service Tag>
Severity: Error (4)
Recommended response action: Exit maintenance mode for the SDS in the PowerFlex cluster.

Message ID: PFXM01
Message: Appliance storage is within optimal parameters.
Severity: Info
Recommended response action: No action is needed.

Message ID: PFXM02
Message: Storage usage is between 75 and 90 percent. It is recommended that unnecessary files be removed.
Severity: Warning
Recommended response action: Remove unnecessary compliance files or operating system images. If additional components are added to the cluster, consider disk expansion.

Message ID: PFXM03
Message: Storage usage is above 90 percent and at a critical state. Remove any unnecessary files from the appliance.
Severity: Critical
Recommended response action: Remove unnecessary compliance files or operating system images. Consider disk expansion if additional components are added to the cluster. If space remains critical, contact Dell Technologies Support.

Message ID: PFXM04
Message: The operation cannot be completed because of an error during a firmware or software update.
Severity: Critical
Recommended response action: Retry the operation. If the problem persists, download the latest update package from your service provider and retry the operation. To determine the reason for the error, look at the user interface logs or the console logs. PowerFlex Manager provides the name of the resource that failed, so look for logs that are related to that resource to identify the error.

Message ID: PFXM05
Message: The operation cannot be completed because of an error during a PowerFlex Manager virtual appliance update.
Severity: Critical
Recommended response action: Check the network connection and access to the update package. Retry the operation. If the error persists, contact your service provider. To determine the reason for the error, look at the user interface logs or the console logs.

Message ID: NIC100
Message: The slot/integrated NIC X port X network link is down.
Severity: Moderate
Recommended response action: Bring up the switch port that is down.

Message ID: NIC101
Message: The slot/integrated NIC X port X network link is started.
Severity: Moderate
Recommended response action: No action is needed.

Message ID: TST001
Message: The iDRAC generated a test trap event in response to a user request.
Severity: Moderate
Recommended response action: No action is needed.
PowerFlex alert messages
You can view the PowerFlex generated messages in SNMP, PowerFlex GUI, REST, and Secure Remote Services.
To view PowerFlex alert messages, see Monitor Dell EMC PowerFlex v3.5.x. The PowerFlex Alerts in SNMP, PowerFlex GUI,
REST, and SRS topic summarizes the following alert message information:
● Alert message in PowerFlex GUI
● Alert message in REST
● Alert message in SNMP Trap
● Alert code (for Secure Remote Services)
● Severity
● Recommended action
4
Managing components with PowerFlex Manager
Once PowerFlex Manager is configured, you can use it to perform the ongoing tasks necessary to manage your PowerFlex rack.
The following list describes typical tasks for managing PowerFlex rack components and the steps to take in PowerFlex Manager to initiate each:

● View network topology:
1. Click Services.
2. On the Services page, select a service.
3. On the Details page, click the Port View tab.

● Run inventory (PowerFlex nodes, switches, PowerFlex Gateway, and VMware vCenter cluster):
1. Click Resources and click the All Resources tab.
2. Click the check box for the resource you want to update and then click Run Inventory.
3. After running the inventory, click Update Service Details on the Services page for any service that requires the updated resource data.
If you rename objects outside of PowerFlex Manager (such as a VMware ESXi host, volume, datastore, VDS, port group, data center, or cluster), run the inventory and update the details for any service that requires the updates.

● Add an existing service:
1. Click Services.
2. Click +Add Existing Service.

● Perform PowerFlex node expansion:
1. Click Services.
2. On the Services page, select a service.
3. On the Details page, under Resource Actions, expand the Add Resources list and click Node.
The procedure is the same for new services and existing services.

● Remove a PowerFlex node:
1. Click Services.
2. On the Services page, select a service.
3. On the Service Details page, under Resource Actions, click Remove Resource.
4. Select the node to remove and click Next.
5. Select Delete Resource for the Resource removal type.

● Enter service mode:
1. Click Services.
2. On the Services page, select a service.
3. On the Service Details page, under Service Actions, click Enter Service Mode.

● Exit service mode:
1. Click Services.
2. On the Services page, select a service.
3. On the Service Details page, under Service Actions, click Exit Service Mode.

● Reconfigure MDM roles:
1. Click Services.
2. On the Services page, select a service.
3. On the Service Details tab, select a node and click Node Actions > Reconfigure MDM Roles, or click Reconfigure MDM Roles under Service Actions.
PowerFlex Manager limits
● Maximum number of nodes in PowerFlex Manager: 384
● Maximum number of nodes in a service: 32
● Maximum number of volumes in a service: 1024 for hyperconverged and 32,000 for storage-only
● Maximum number of networks in a service: 400
● Maximum number of discovered resources:
○ Switches: 10
○ CloudLink Centers: 4
○ VMware vCenters: 6
● PowerFlex Manager settings limits:
○ NTP: 4 for the PowerFlex Manager appliance; node operating systems support 1-3, and PowerFlex Manager sets 1 during deployment
○ SMTP: 1 per PowerFlex Manager appliance for the alert connector
○ SNMP: 20 communities
○ Syslog: 4 remote syslog servers
Managing external changes
If you make manual changes outside of PowerFlex Manager, you must perform some steps within PowerFlex Manager. The
steps ensure that the external changes are reflected within the user interface and the environment is kept in a healthy state.
The following list describes common tasks for managing external changes and the steps to take in PowerFlex Manager:

If you have manually replaced a node outside of PowerFlex Manager for a service in PowerFlex Manager:
1. Remove the node from the service. On the Services page, select the service. On the Details page, under Resource Actions, click Remove Resource.
2. Remove the node from the list of resources. On the Resources page, click the All Resources tab. From the list of resources, select the resource, and click Remove.
3. Discover the new node. On the Resources page, click Discover on the All Resources tab.
4. Update the service details. On the Services page, click Update Service Details.

If you have manually created additional VLANs on the VDS in vCenter outside of PowerFlex Manager:
1. Remove the service. On the Service Details page, click Remove Service under Service Actions. When you click Remove Service, do not select Delete Service, because that Service Removal Type deletes the service and makes configuration changes to the nodes, switch ports, virtual machine managers, and PowerFlex. Instead, select Remove Service as the Service Removal Type. Then, to keep the nodes in the inventory, select Leave nodes in PowerFlex Manager inventory and set state to and choose Managed. For more information about removing a service, see Removing a service in the PowerFlex Manager online help.
2. Rediscover the service. On the Services page, click +Add Existing Service.
If you do not choose the correct options when you remove the service, you could delete the service and destroy it, or leave the servers in an unmanaged state and be unable to add the existing service.

If you have manually created volumes outside of PowerFlex Manager:
Click Update Service Details on the Services page for any service that requires the updated volumes.

If you have manually deleted volumes outside of PowerFlex Manager:
Click Update Service Details on the Services page for any service that requires updated information about the deleted volumes. PowerFlex Manager displays a message indicating that the volumes have been removed from the service.

If you have manually added nodes outside of PowerFlex Manager:
Click Update Service Details on the Services page for any service that requires the updated nodes.

If you have manually removed nodes outside of PowerFlex Manager:
Click Update Service Details on the Services page for any service that requires updated information about the removed nodes. Then, manually remove the resource on the Resources page by clicking the All Resources tab, selecting the resource, and clicking Remove.

If you have renamed objects such as an ESXi host, volume, datastore, VDS, port group, data center, cluster, and so forth:
Click Update Service Details on the Services page for any service that requires the updates.

If you have manually changed the IP address for a switch:
1. Remove the switch. On the Resources page, click the All Resources tab. From the list of resources, select the switch, and click Remove.
2. Rediscover the switch. On the Resources page, click the All Resources tab. Click Discover.
3. If the IP address is in use, remove the service and add it again. On the Services page, click +Add Existing Service.

If you have manually added an incorrect network to a service:
Manually delete the network on the switch or in VMware vCenter. Then remove the service in PowerFlex Manager, and click +Add Existing Service to reflect the network deletion.
Performing maintenance activities in a PowerFlex cluster
You place a node in maintenance mode to repair, replace, or upgrade hardware components for the customer and management
clusters.
For more information, see Data assurance during maintenance.
When performing maintenance on PowerFlex nodes, there are three maintenance options:

● Instant maintenance mode: Perform short-term maintenance that lasts less than 30 minutes. It is designed for quick entry to and exit from a maintenance state. The node is immediately and temporarily removed from active participation. Use it for scenarios such as non-disruptive, rolling upgrades, where the maintenance window is only a few minutes (for example, a reboot) and there are no known hardware issues.
● Protected maintenance mode: Perform maintenance or updates that require longer than 30 minutes in a safe and protected manner. PowerFlex makes a temporary copy of the data, providing data availability without the risk of exposing a single accessible copy.
● Evacuate the node from the cluster: The default method prior to PowerFlex version 3.5. Data is migrated to other nodes in the cluster.
Instant maintenance mode (IMM)
In instant maintenance mode, the data on the node undergoing maintenance is not removed from the cluster. However, this data
is not available for use for the duration of the maintenance activity. Instead, extra copies of data residing on the other nodes are
used for application reads.
The existing data on the node being maintained is, in effect, frozen on the node. This is a planned operation that does not
trigger a rebuild. Instead, the MDM instructs the SDCs where to read and write IOs intended to be directed at the node in
maintenance.
A disadvantage of instant maintenance mode is that it introduces a risk of having only a single copy of data available during
maintenance activity. During instant maintenance mode, there are always two copies of data. However, any copy residing on the
node in maintenance is unavailable for the maintenance duration.
When exiting instant maintenance mode, you do not need to rehydrate the node completely. You only need to sync back
the relevant changes that occurred during maintenance and reuse all the unchanged data on the node. This results in a quick
exit from maintenance mode and a quick return to full capacity and performance.
Protected maintenance mode (PMM)
Protected maintenance mode initiates a many-to-many rebalancing process. Data is preserved on the node entering
maintenance, and a temporary copy of the data is created on the sustaining nodes. Data on the node in maintenance is frozen
and inaccessible. Protected maintenance mode maintains two copies of data at all times, avoiding the risks from the single copy
in instant maintenance mode.
During protected maintenance mode, changes are tracked only for writes that affect the SDS under maintenance. When
exiting the SDS from maintenance mode, only the changes that occurred during maintenance need to be synced to the SDS.
Due to the creation of a temporary third data copy, protected maintenance mode requires more spare capacity than instant
maintenance mode. Account for this spare capacity during deployment if you plan to use protected maintenance mode. There
must be enough spare capacity to handle at least one other node failure, as protected maintenance mode cycles might be long
and other elements could fail.
Protected maintenance mode makes the best use of all unused, available capacity, as it uses both the allocated spare capacity
and any generally free capacity. It does not ignore capacity requirements. Nodes entering protected maintenance mode or in the
same fault set may have degraded capacity.
The following equation summarizes the minimum requirement:
Free + Spare - (5% of the storage pool capacity) >= size of the node entering protected maintenance mode
Use the following command to get the system information for this calculation: scli --query_all
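For example, here is a minimal back-of-the-envelope check. The capacities below are illustrative assumptions, not values from a real system; read the actual numbers from the storage pool section of the scli output:

scli --query_all
# Suppose the storage pool reports:
#   Total capacity: 100 TB, Spare capacity: 10 TB, Free capacity: 12 TB
#   The node entering protected maintenance mode holds 16 TB
# Check: Free + Spare - 5% of pool >= PMM node size
#        12 + 10 - 5 = 17 TB >= 16 TB, so the node can safely enter PMM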
Eject the node from the cluster
When a node is gracefully removed using the UI or CLI, a many-to-many rebalance operation between nodes begins. This
ensures that there are two copies of all data on all other nodes before the node being maintained is dropped from the cluster.
Data is fully protected as there are always two available copies of the data.
You may need to adjust the spare capacity assigned to the cluster overall, as the data rebalancing uses up free spare capacity
on the other nodes. For example, if you start with 10 nodes and 10% spare capacity, running with nine nodes requires 12%
spare capacity to avoid an insufficient spare capacity alert. Spare capacity must be equal to or greater than the capacity of the
smallest unit (node).
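To make the spare capacity arithmetic concrete, here is an illustrative calculation with assumed node counts and sizes (not taken from a real configuration):

# 10 nodes x 10 TB each = 100 TB raw; spare at 10% = 10 TB (one node's worth)
# After removing one node: 9 nodes x 10 TB = 90 TB raw
# One node's worth of data (10 TB) must still fit in spare: 10 / 90 is about 11.1%
# Round up to the next whole percent and configure 12% spare capacity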
During maintenance, the cluster functions normally, but with one less node and therefore less capacity and lower performance.
Data writes are sent to and mirrored on the other nodes. It does not matter how long the maintained node is offline, as it is no
longer a part of the cluster. There is no exposure or risk of data unavailability if a problem arises that prohibits the node from
being re-added.
General restrictions and limitations:
● Do not put two nodes from the same protection domain simultaneously into instant maintenance mode or protected
maintenance mode.
● You cannot mix protected maintenance mode and instant maintenance mode on the same protection domain.
26
Managing components with PowerFlex Manager
● For each protection domain, all SDS concurrently in protected maintenance mode must belong to the same fault set. There
are no inter-protection domain dependencies for protected maintenance mode.
● You can take down one SDS or full fault set in protected maintenance mode.
Entering protected maintenance mode
Use this procedure to enter protected maintenance mode using PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the Services page, select a service, and click View Details in the right pane.
3. Click Enter Service Mode under Service Actions.
NOTE: The service should have at least three nodes to enter protected maintenance mode using PowerFlex Manager.
4. Select one or more nodes on the Node Lists page, and click Next.
NOTE: For an environment with fault sets, PowerFlex Manager can put a single node or a full fault set in protected
maintenance mode. For an environment without fault sets, PowerFlex Manager requires a minimum of four nodes to use
protected maintenance mode.
5. Select Protected Maintenance Mode.
6. Click Enter Service Mode.
7. Verify that the node shows as Service Mode (Protected Maintenance) in PowerFlex Manager.
Exiting protected maintenance mode
Use this procedure to exit protected maintenance mode using PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the Services page, select the service.
3. Click Exit Service Mode.
Data assurance during maintenance
Use these guidelines to guarantee your data is safe during maintenance operations.
The following list provides guidance on the available data assurance mechanisms when performing maintenance operations. It
also indicates whether each option is applicable for PowerFlex management controller (PFMC) 2.0.
NOTE: If using a version of PowerFlex prior to 3.5, protected maintenance mode is not available. If the maintenance
window is greater than 30 minutes, use the eject node option.
Maintenance operation: Node reboot
● Considerations: Generally quick and does not involve risk
● Higher risk - IMM: Acceptable option
● Lower risk - PMM: Conservative approach
● Eject the node from the cluster: Unnecessary
● Applicable for PFMC 2.0: Yes

Maintenance operation: Node upgrade: firmware
● Considerations: Varies in time depending on the components being upgraded
● Higher risk - IMM: Use for brief upgrades of single components needing a reboot (BIOS)
● Lower risk - PMM: Recommended. Use if upgrading multiple components, including long-running firmware
● Eject the node from the cluster: Unnecessary
● Applicable for PFMC 2.0: Yes

Maintenance operation: Node upgrade: OS and SDC
● Considerations: Upgrades and/or patches can be applied in under 30 minutes per node
● Higher risk - IMM: Acceptable for brief patch applications
● Lower risk - PMM: Recommended in most situations to provide additional protection
● Eject the node from the cluster: Unnecessary
● Applicable for PFMC 2.0: Yes

Maintenance operation: Node upgrade: firmware, OS, SDC, and CloudLink agent
● Considerations: Mixed upgrade approach combining all updates with a single reboot
● Higher risk - IMM: Not recommended, as most upgrades will not complete within 30 minutes
● Lower risk - PMM: Recommended
● Eject the node from the cluster: Unnecessary
● Applicable for PFMC 2.0: Yes (the CloudLink agent is not applicable for controller nodes)

Maintenance operation: Network changes: restart network, adjust MTU, add/remove VLANs
● Considerations: Network changes are typically quick but interrupt connectivity to nodes
● Higher risk - IMM: Acceptable for quick updates
● Lower risk - PMM: Recommended in most cases. Provides additional protection
● Eject the node from the cluster: Unnecessary
● Applicable for PFMC 2.0: Yes

Maintenance operation: PowerFlex software upgrade
● Considerations: Upgrade of components should take less than one minute per node, if no issues occur. NOTE: SDC is upgraded as part of the node OS upgrade
● Higher risk - IMM: Acceptable for software component update
● Lower risk - PMM: Provides additional protection to handle any hardware failure
● Eject the node from the cluster: Unnecessary
● Applicable for PFMC 2.0: Yes

Maintenance operation: Other operations
● Considerations: Use the expected activity time to guide the decision of which mode to use
● Higher risk - IMM: Use if under 30 minutes
● Lower risk - PMM: Use if greater than 30 minutes
● Eject the node from the cluster: Use if there are other considerations or you do not expect to return the node to the cluster
● Applicable for PFMC 2.0: Yes
Redistribute the MDM cluster using PowerFlex Manager
Redistribute the MDM across clusters to ensure maximum cluster resiliency and availability.
About this task
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement and
adjusted if found noncompliant with the published guidelines.
When adding new MDM or tiebreaker nodes to a cluster, first place the MDM components on the PowerFlex storage-only nodes
(if available). Then, place the components on the PowerFlex hyperconverged nodes.
Use PowerFlex Manager to change the MDM role for a node in a PowerFlex cluster. When adding a node to a cluster, you might
want to switch the MDM role from one of the existing nodes to the new node.
You can launch the wizard for reconfiguring MDM roles from the Services page or from the Resources page. The nodes that
are listed and the operations available are the same regardless of where you launch the wizard.
Steps
1. Log in to PowerFlex Manager.
2. Access the wizard from the Services page or the Resources page by clicking Services or Resources on the menu bar.
a. Select the service or resource with the PowerFlex Gateway containing the MDMs.
b. Click View Details.
c. Click Reconfigure MDM Roles. The MDM Reconfiguration page displays.
3. Review the current MDM configuration for the cluster.
4. For each MDM role that you want to reassign, use Select New Node for MDM Role to choose the new hostname or IP
address. You can reassign multiple roles at a time.
5. Click Next. The Summary page displays.
6. Type Change MDM Roles to confirm the changes.
7. Click Finish.
Remove a service
Use this procedure to remove a service that is no longer required.
About this task
PowerFlex Manager supports two types of removal:
● Delete the entire service, which deletes the deployment information and also makes any required configuration changes for
components that are associated with the service.
● Remove just the deployment information for the service without making any configuration changes to the deployed
components.
If DAS Cache is installed on a node, or if the node has a VMware NSX-T or NSX-V configuration, you can remove the
deployment information for a service, but not delete the service entirely. PowerFlex Manager also does not allow you to delete a
service if the PowerFlex Gateway used in the service is currently being updated on the Resources page.
Standard users can delete only the services that they have deployed.
NOTE: If you choose Delete Service, you must remove the MDM role from the PowerFlex Gateway manually:
1. Log in to the PowerFlex Gateway using SSH.
2. Type vi /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties to
edit the file and remove the MDM IP address and system ID.
3. Save the file and restart the gateway service by typing service scaleio-gateway restart.
4. Log in to PowerFlex Manager, and run the inventory on the PowerFlex Gateway associated with the service.
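As a sketch of step 2, the entries of interest in gatewayUser.properties typically resemble the following; clear their values so that the gateway no longer points at the old MDM cluster. The key names can vary between PowerFlex versions, so verify them against your installed file:

mdm.ip.addresses=
system.id=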
Steps
1. On the menu bar, click Services.
2. On the Services page, click the service, and in the right pane, click View Details.
3. On the Service Details page, in the right pane, under Service Actions, click Remove Service.
4. In the Remove Service dialog box, select the Service removal type:
● Delete Service makes configuration changes to the nodes, switch ports, virtual machine managers, and PowerFlex to
unconfigure those components. Also, it returns the components to the available inventory.
● Remove Service removes deployment information, but does not make any configuration changes to the nodes, switch
ports, virtual machine managers, and PowerFlex. Also, it returns the components to the available inventory.
5. If you select Remove Service, perform the following steps:
a. To keep the nodes in the inventory, select Leave nodes in PowerFlex Manager inventory and set state to and select
the state:
● Managed
● Unmanaged
● Reserved
b. To remove the nodes, select Remove nodes from the PowerFlex Manager inventory.
c. Click Remove.
6. If you select Delete Service, perform the following steps:
a. Select Delete Cluster(s) and Remove from vCenter to delete and remove the clusters from VMware vCenter.
b. Select Remove Protection Domain and Storage Pools from PowerFlex to remove the protection domain and
storage pools that are created during the service deployment.
If you select this option, you must select the target PowerFlex Gateway. The PowerFlex Gateway is not removed.
PowerFlex Manager removes only the protection domain and storage pools that are part of the service. If multiple
services are sharing a protection domain, you might not want to delete the protection domain.
For a compression enabled service, PowerFlex Manager deletes the acceleration pool and the DAX devices when you
delete the service.
c. Select Delete Machine Group and remove from CloudLink Center to clean up the related components in CloudLink
Center.
CloudLink Center cleanup includes the deletion of the machine group, keystore, and approved network that are related
to the service being deleted. These components are removed from CloudLink Center only if all the machines that are
associated with the machine group are deleted first; otherwise, the cleanup does not succeed.
d. If you are certain that you want to proceed, type DELETE SERVICE.
e. Click Delete.
5
Administering the network
Perform these procedures to administer the PowerFlex rack network.
Add a network to a service
You can add an available network to a service or choose to define a new network for a configuration that was initially deployed
outside of PowerFlex Manager. You cannot remove an added network using PowerFlex Manager.
About this task
Before you can add a network to a service, define the network.
You can add a static route to allow nodes to communicate across different networks. The static route can also be used to
support replication in storage-only and hyperconverged services.
Prerequisites
Ensure that a new VLAN is created on any switches that need access to that VLAN and is added to any management cluster
server-facing ports. The VLAN is then added to any northbound trunks to other switches with which it must communicate.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. Select a service for which you want to add a network and in the right pane, click View Details.
4. Under Resource Action, from the Add Resources list, click Add Network.
The Add Network window is displayed. All used resources and networks are displayed under Resource Name and
Networks.
5. Click Add Additional Network to add an additional network:
a. From the Available Networks list, select the network, and click Add.
The selected network is displayed under Network Name. You can define a new network by clicking Define a New
Network.
b. Select Port Group from the Select Port Group list.
c. Click Save.
6. Click Add Additional Static Route to add an additional static route:
a. Click Add New Static Route.
b. Select a Source Network.
The source network must be a PowerFlex data network or a replication network.
c. Select a Destination Network.
The destination network must be a PowerFlex data network or a replication network.
d. Type the IP address for the Gateway.
e. Click Save.
Administering the network outside of PowerFlex Manager
Optimizing the network
Perform the following procedures to optimize network performance.
Configuring the Storage Data Server network interface card with VMware ESXi
Configure the Storage Data Server (SDS) network interface card (NIC) to optimize network performance with VMware ESXi.
About this task
After performing this procedure, each SDS will have two NICs corresponding to the two different PowerFlex data paths. NICs
corresponding to the same data path belong to the same subnet.
The dvswitch names are for example only and may not match the configured system. Do not change these names or a data
unavailable or data lost event may occur.
Steps
1. Select any one of the following to configure the data network in VMware vSphere:
● For non-bonded NIC, select dvswitch1, and add flex-data1-<vlanid>, and select dvswitch2, and add flex-data2-<vlanid>.
● For static bonding NIC, select flex_dvswitch, and add flex-data1-<vlanid> and flex-data2-<vlanid>.
● For LACP bonding NIC, select flex_dvswitch, and add flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, and
flex-data4-<vlanid>.
2. Right-click Configure, and click Edit Settings.
3. Verify the VLAN Type depending on the network:
NOTE: The VLANs specified are for representation purpose only.
● For legacy network, verify VLAN Type is set to None.
● For static bonding NIC, verify VLAN Type is set to VLAN as 151 for flex-data1 and VLAN as 152 for flex-data2.
● For LACP bonding NIC, verify the VLAN Type is set to VLAN as 151 for flex-data1, 152 for flex-data2, 153 for
flex-data3, and 154 for flex-data4.
Refer to the LCS for the actual VLAN values.
4. Select dvswitch2 > flex-data2- <vlanid> .
NOTE: This step does not apply to PowerFlex with the static bonding and LACP bonding NIC.
5. Verify VLAN Type is set to None.
6. Type the following:
● In the Cisco NX-OS CLI:
# sh run int E1/1/1
interface Ethernet1/1/1
description To R730-1
switchport
switchport access vlan 152
no shutdown
NOTE: The LACP bonding NIC port design configurations use switchport mode trunk. Use sh run int e1/1 in
the absence of breakout cables.
In the preceding example, on network switch hop171-vxrfd-3164a, Ethernet1/1/1 is configured as an access port and
drops any packet that is tagged with an 802.1Q header. Not all switches have port 1/1/1; in such cases, type sh run int E1/1.
● In the Dell OS CLI:
# sh run int eth1/1/1
interface Ethernet1/1/1
description To R730-1
switchport
switchport access vlan 152
no shutdown
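For the static and LACP bonding NIC port designs, it can also help to confirm that the port channel and vPC are healthy before checking the VLAN configuration. A minimal check on a Cisco Nexus access switch (output formats vary by platform and release):

show port-channel summary
show vpc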
Configuring the VMware ESXi port group
Configure the VMware ESXi vNetwork Distributed Switch (VDS) and virtual adapter MTU to optimize performance.
About this task
The dvswitch names are for example only and may not match the configured system. Do not change these names or a data
unavailable or data lost event may occur.
Steps
1. Verify that the VDS dvswitch0 is set to 1500. For static bonding and LACP bonding NIC port design, dvswitch0 is seen as
cust_dvswitch.
2. Verify that the VMkernel (VMK) interfaces are set as follows for the non-bonded NIC port design:
a. Verify that the VMK interfaces on dvswitch0 are set to 1500.
b. For LACP bonding NIC port design, verify that the MTU is set to 9000.
c. Verify that the VMK interfaces on dvswitch1 and dvswitch2 for the PowerFlex data paths are set to 9000.
● The dvswitch2 is not applicable for static bonding and LACP bonding NIC port design.
● For static bonding NIC port design, dvswitch1 is seen as flex_dvswitch, and is applicable for flex-data1-<vlanid> and
flex-data2-<vlanid>.
● For LACP bonding NIC port design, dvswitch1 is seen as flex_dvswitch, and is applicable for flex-data1-<vlanid>,
flex-data2-<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid>.
3. Verify that the VMkernel (VMK) interfaces are set as follows for the static bonding and LACP bonding NIC port design:
a. Verify that the VMK interfaces on cust_dvswitch are set to 1500.
4. Select Load Balancing > Route based on IP hash for all ports.
5. Change the MTU values for the cust_dvswitch to 9000:
a. Log in to VMware vCenter using administrator credentials.
b. Click Networking > cust_dvswitch.
c. Right-click and select Edit Settings.
d. Click Advanced and change the MTU value to 9000.
6. Change MTU to 9000 for vMotion VMK:
a. Click Hosts and Clusters.
b. Click the node and click Configure.
c. Click VMkernel adapters under Networking.
d. Select the vMotion VMK and click Edit.
e. From the Port Properties tab, change the MTU to 9000.
f. Repeat for other nodes.
NOTE: This is optional for MGMT VMK. To use or implement Jumbo frames on MGMT VMK, repeat this procedure
for MGMT VMK.
Switch             Default/Current MTU   Recommended MTU
Dell PowerSwitch   -                     9216
Cisco Nexus        -                     9216
cust_dvswitch      1500                  9000
vMotion VMK        1500                  9000
mgmt               1500                  9000
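If you prefer the host CLI to the vSphere Client, a VMK MTU can also be checked and set with esxcli. A brief sketch; using vmk2 as the vMotion VMK is an assumption, so confirm the vmk number in your environment first:

esxcli network ip interface list
esxcli network ip interface set --interface-name=vmk2 --mtu=9000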
Monitoring the network
Perform these procedures to monitor the network.
Measuring the bi-directional bandwidth between VMware ESXi hosts in the PowerFlex node
Use Iperf to measure the maximum possible bandwidth between VMware ESXi hosts in the PowerFlex node.
About this task
Iperf is a component of the VMware vSphere vSAN Health check plug-in.
Steps
1. Type the following to disable the firewall on all VMware ESXi hosts.
esxcli network firewall set --enabled false
2. Type the following to create a copy of iperf3 in the same directory on each host:
cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
3. Type the following to note the IP addresses of the interfaces to test:
esxcli network ip interface ipv4 get
4. Type the following to test the VMkernel connectivity between the VMware ESXi hosts:
# vmkping IP address
5. Type the following to use the copy of Iperf3 as a server:
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B IP address
where IP address is the IP address of the interface to test. This runs iperf3 on the VMware ESXi host as the server (-s)
and binds it (-B) to the interface with that IP address.
You will see the following message:
Server listening on 5201
6. Type the following to use the copy of Iperf as a client on another server:
/usr/lib/vmware/vsan/bin/iperf3.copy -c IP address
where IP address is the IP address of the iperf3 server in the previous step.
After the test is complete, go to the iperf3 server (the first VMware ESXi host) and check the speed. Quit the iperf3 session
by pressing Ctrl+C.
7. Repeat these steps for each of the hosts.
8. When you are finished, type the following to enable the firewall on all VMware ESXi hosts:
esxcli network firewall set --enabled true
Verifying connectivity between Storage Data Servers and Storage Data Clients
Verify the connectivity between each Storage Data Server (SDS) and Storage Data Client (SDC) by pinging each one.
About this task
An SDC is a lightweight driver that is deployed directly on VMware ESXi and an SDS is a virtual appliance that is deployed on
every host providing storage to the PowerFlex storage pool.
Steps
1. Start an SSH session with a VMware ESXi host using PuTTY or a similar SSH client.
2. Log in to the host using root.
3. Ping each SDC and SDS using a 9K packet.
4. Repeat for each VMware ESXi host.
5. Start an SSH session with an SDS host using PuTTY or a similar SSH client.
6. Log in to the host using root.
7. Ping each SDS.
8. If jumbo frames are configured, verify them by typing the following (see the note after this procedure):
From VMware ESXi server: vmkping -d -s 8972 x.x.x.x
From Linux: ping -M do -s 8972 [destinationIP]
9. Repeat for each SDS host.
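The packet size 8972 in these commands is the 9000-byte MTU minus the 20-byte IP header and the 8-byte ICMP header; the -d (ESXi) and -M do (Linux) options forbid fragmentation, so the ping fails if any hop along the path has a smaller MTU. An illustrative run with an assumed data network address:

vmkping -d -s 8972 192.168.152.21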
Checking the maximum transmission unit on all switches and servers
Maximum Transmission Unit (MTU) is the largest physical packet size, measured in bytes, that a network can transmit. Any
messages larger than the MTU are divided into smaller packets before transmission.
Checking the maximum transmission unit on a physical switch
Maximum Transmission Unit (MTU) is the largest physical packet size, which is measured in bytes, that a network can transmit.
Any messages larger than the MTU are divided into smaller packets before transmission.
Steps
1. From the Cisco NX-OS switch CLI, log in to the switch you want to check.
2. Type the following to obtain the MTU for a specific switch:
show interface <port name> | grep -i MTU
3. Type the following to check the MTU for the port channels for a static or an LACP bonding NIC port design:
show interface <port channel> | grep -i MTU
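For instance, on a Cisco Nexus access switch the checks might look like the following (the interface names are illustrative); ports configured for jumbo frames should report MTU 9216:

show interface ethernet 1/1 | grep -i MTU
show interface port-channel 101 | grep -i MTU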
Checking the maximum transmission unit on a VMkernel port
Maximum Transmission Unit (MTU) is the largest physical packet size, measured in bytes, that a network can transmit. Any
messages larger than the MTU are divided into smaller packets before transmission.
Steps
1. If you are using a later version of VMware vSphere 6.x, navigate to the VMware ESXi host and perform the following
steps:
a. Click the Configure tab, and click Networking.
b. Select VMkernel adapters.
c. Select the VMkernel adapter from the table.
d. Click Edit.
e. Click Port properties.
f. Verify the MTU setting and click OK.
2. If you are using VMware vSphere 6.x, navigate to the VMware ESXi host and perform the following steps:
a. Click the Manage tab, and click Networking.
b. Select VMkernel adapters.
c. Select the VMkernel adapter from the table.
d. Click Edit.
e. Select NIC Settings and select MTU.
f. Verify the MTU setting and click OK.
Checking the maximum transmission unit on all portgroups or ports
Maximum Transmission Unit (MTU) is the largest physical packet size, which is measured in bytes, that a network can transmit.
Any messages larger than the MTU are divided into smaller packets before transmission.
Steps
1. From VMware vCenter Server, click the Home tab, and click Inventory > Networking.
2. Right-click the vSphere Distributed Switch (VDS), and select Settings > Edit Settings.
3. Click the Properties tab, and select Advanced.
4. Verify the MTU setting.
5. Click OK.
Gathering logs from the Cisco Nexus network for troubleshooting
It is important to export the technical support file from your Cisco Nexus switch before attempting to troubleshoot the switch
or contact Dell EMC Support.
About this task
Use the show tech-support command to obtain the technical support file from your Cisco Nexus switch, which contains log
output and a snapshot of the switch at the time of failure.
Steps
1. Open an SSH session with the Cisco Nexus switch using PuTTY or a similar SSH client.
2. Log in with admin or other credentials with privileges to run show tech-support.
3. Right-click the menu bar and select Change Settings… and enter the following information:
● Session logging: select All session output
● Log file name: putty.log
4. Click Apply.
5. In the Cisco NX-OS CLI, type the following:
show tech-support details | no-more
show tech-support vpc | no-more
show process cpu history | no-more
Gathering logs from the Dell EMC network for troubleshooting
It is important to export the technical support file from your Dell EMC PowerSwitch switch before attempting to troubleshoot
the switch or contact Dell EMC support.
About this task
Use the show tech-support command to obtain the technical support file from your Dell EMC PowerSwitch switch, which
contains log output and a snapshot of the switch at the time of failure.
Steps
In the Dell OS CLI, type the following:
show tech-support
show processes node-id node-id-number [pid process-id]
Adding a VLAN to the network
Add a VLAN to a Cisco Nexus production network
Perform the following sequential procedures to add a VLAN to the production Cisco Nexus network.
1. Identify the servers in the production compute cluster and the corresponding access switches to which they are connected.
2. Add a VLAN to the following:
a. Uplink port channels
b. Virtual port channel (vPC) peer links
c. Server port channels on each access switch
3. Add a distributed port group for the PowerFlex rack production network to the VMware ESXi compute cluster. This
establishes end-to-end connectivity from the VM to the production network.
4. Verify the VLAN.
Preparing to add a VLAN to the network
Verify certain configurations are in place before adding a VLAN to the network.
Steps
1. Verify there is a Cisco Nexus Virtual Port Channel (vPC) between all access switches.
2. Verify the port channels between servers/hosts and access switches are all configured.
3. Verify the port channels between customer network switches and access switches are all configured.
4. Verify the VLAN is created and enabled on the port channel between the network switches and the access switches.
5. Verify the SVI is configured on the network switches.
6. Verify you have a VLAN IP address, mask, and gateway with which to verify the VLAN configuration (see the example after these steps).
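Once these prerequisites are in place, a quick sanity check from the switch CLI might look like the following (the VLAN ID and gateway address are placeholders for your values):

show vlan id <vlan-id>
ping <vlan-gateway-ip>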
Identifying the leaf-spine switches
Identify the leaf-spine switches as part of adding a VLAN to the network.
Prerequisites
NOTE: Perform this procedure only if you are using a leaf-spine network design.
Steps
1. Identify the production PowerFlex compute-only nodes and the corresponding leaf-spine switches to which they connect.
The following diagram shows a three-spine network design with a Cisco Nexus 9336C-FX2, Cisco Nexus 9364C, or Cisco
Nexus 9364C-FX switch as a spine switch and a Cisco Nexus 93240YC-FX2, Cisco Nexus 93180YC-FX, 93180YC-EX, or
Cisco Nexus 9364C as a leaf switch with vPC.
2. Using the management aggregation switch:
Configuring the access switches
Configure the access switches as part of adding a VLAN to the network.
About this task
Using the management aggregation switch:
Steps
1. From the Cisco NX-OS switch CLI, type the following for general configurations:
ToR Switch-A# conf t
ToR Switch-A(config)# vlan vlan-id
ToR Switch-A(config-vlan)# name vlan-name
ToR Switch-A(config-vlan)# end
2. Type the following to configure the vPC peer-link:
ToR Switch-A# conf t
ToR Switch-A(config)# interface port-channel <vPC peer-link port-channel num>
ToR Switch-A(config-if)# switchport mode trunk
ToR Switch-A(config-if)# switchport trunk allowed vlan add vlan-id
ToR Switch-A(config-if)# spanning-tree port type network
ToR Switch-A(config-if)# vpc peer-link
ToR Switch-A(config-if)# end
3. Type the following to configure the uplink port channels:
ToR Switch-A# conf t
ToR Switch-A(config)# interface port-channel <port-channel num between customer switch and ToR switch>
ToR Switch-A(config-if)# switchport mode trunk
ToR Switch-A(config-if)# spanning-tree port type edge trunk
ToR Switch-A(config-if)# spanning-tree bpduguard enable
ToR Switch-A(config-if)# spanning-tree guard root
ToR Switch-A(config-if)# switchport trunk allowed vlan add vlan-id
ToR Switch-A(config-if)# speed <speed>
ToR Switch-A(config-if)# lacp vpc-convergence
ToR Switch-A(config-if)# vpc <vpc id>
ToR Switch-A(config-if)# end
4. Type the following to configure the server port channels for each access switch:
ToR Switch-A# conf t
ToR Switch-A(config)# interface port-channel <port-channel num between customer switch and ToR switch>
ToR Switch-A(config-if)# switchport mode trunk
ToR Switch-A(config-if)# spanning-tree port type edge trunk
ToR Switch-A(config-if)# spanning-tree bpduguard enable
ToR Switch-A(config-if)# spanning-tree guard root
ToR Switch-A(config-if)# switchport trunk allowed vlan add vlan-id
ToR Switch-A(config-if)# speed <speed>
ToR Switch-A(config-if)# lacp vpc-convergence
ToR Switch-A(config-if)# vpc <vpc id>
ToR Switch-A(config-if)# end
5. Verify all port channel interfaces are in forwarding state for the added VLAN (see Desg FWD 1).
The following example shows the configuration for the server and peer-link port channel. There is no uplink port channel.
ToR Switch-A#show spanning-tree vlan <vlan-id>
<lines removed>
Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ------------------------------
Po50             Desg FWD 1         128.4145 (vPC peer-link) Network P2p
Po111            Desg FWD 1         128.4206 (vPC) Edge P2p
6. To enable NXAPI, type:
switch#conf t
switch(config)# feature nxapi
switch(config)#nxapi http port 80
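After enabling NX-API, you can verify that it is answering from a management host. A hedged sketch using the JSON-RPC endpoint; the credentials and management IP are placeholders, and HTTPS is preferable where your security policy allows it:

curl -u admin:<password> -H "Content-Type: application/json-rpc" \
  -d '[{"jsonrpc": "2.0", "method": "cli", "params": {"cmd": "show version", "version": 1}, "id": 1}]' \
  http://<switch-mgmt-ip>/ins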
Configuring spine switches
Configure the spine switches as part of adding a VLAN to a leaf-spine network.
Prerequisites
NOTE: Perform this procedure only if you are configuring a leaf-spine network design.
Steps
1. From the Cisco NX-OS switch CLI, enter the following for global configurations:
feature ospf
feature bgp
feature lldp
feature fabric forwarding
feature nv overlay
nv overlay evpn
feature nxapi
2. Enter the following for QoS settings for fabric peer link:
qos copy policy-map type queuing default-8q-out-policy prefix my
class-map type qos match-all DSCP56
match dscp 56
policy-map type qos DSCP56
class DSCP56
set qos-group 7
policy-map type queuing my8q-out
class type queuing c-out-8q-q7
priority level 1
3. Enter the following for the interface configurations:
interface Ethernet<slot>/<port>
mtu 9216
medium p2p
ip unnumbered loopback0
no shutdown
interface mgmt0
vrf member management
ip address <ip address>/<subnet mask>
interface loopback <loopback number>
For 100 GB networks, enter the following for interface configurations:
interface Ethernet<slot>/<port>
mtu 9216
medium p2p
ip unnumbered loopback0
service-policy type qos input DSCP56
service-policy type queuing output cfs-traffic
no shutdown
interface mgmt0
vrf member management
ip address <ip address>/<subnetmask>
interface loopback <loopback number>
ip address <ip address>/<subnetmask>
4. Enter the following to configure the OSPF underlay:
router ospf <process id>
router-id <router-id>
interface Ethernet<slot>/<port>
ip ospf network point-to-point
ip router ospf <process id> area <area-id>
no shutdown
5. Enter the following to configure the Border Gateway Protocol (BGP):
router bgp <as-number>
router-id <router-id>
address-family ipv4 unicast
address-family l2vpn evpn
neighbor <neighbor-ip address>
remote-as <neighbor as-number>
update-source loopback <loopback-number>
address-family ipv4 unicast
address-family l2vpn evpn
send-community
send-community extended
route-reflector-client
6. Enter the following to configure the MP-BGP EVPN:
router bgp <as-number>
address-family l2vpn evpn
neighbor <neighbor-ip address>
remote-as <neighbor as-number>
address-family l2vpn evpn
send-community extended
route-reflector-client
7. To enable NXAPI, type:
switch#conf t
switch(config)# feature nxapi
switch(config)#nxapi http port 80
Configuring leaf switches
Use this task to configure leaf switches.
Prerequisites
NOTE: Perform this procedure only if you are configuring a leaf-spine network design. This applies to both vPC and
non-vPC configurations. Use the code that applies to your configuration.
Steps
1. From the Cisco NX-OS switch CLI, type the following for global configurations:
vlan <vlan id>
system jumbomtu 9216
feature ospf
feature vpc #only for vPC scenario
feature lacp
feature bgp
feature fabric forwarding
feature interface-vlan
feature lldp
feature vn-segment-vlan-based
nv overlay evpn
feature nv overlay
fabric forwarding anycast-gateway-mac <mac address>
system nve infra-vlans <vlan id> #only for vPC scenario
hardware access-list tcam region in-flow-redirect 512 #Requires switch reload.
port-channel load-balance src-dst mac
2. Type the following for the interface configuration of management nodes with vPC:
vpc domain <domain id>
peer-switch
role priority <priority>
system-priority <priority>
peer-keepalive destination <Destination oob mgmt ip> source <Source oob mgmt ip>
virtual peer-link destination <ip> source <ip> dscp 56
delay restore 300
delay restore orphan-port 300
peer-gateway
layer3 peer-router
auto-recovery reload-delay 360
ip arp synchronize
interface port-channel <port channel number>
description "virtual port-channel vpc-peer-link"
switchport mode trunk
spanning-tree port type network
vpc peer-link
interface Ethernet<slot>/<port>
switchport
switchport mode trunk
spanning-tree port type edge trunk
switchport trunk allowed vlan add <vlan id>
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
channel-group <channel group number> mode active
link debounce time 500
no shutdown
interface port-channel <port channel number>
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpduguard enable
switchport trunk allowed vlan add <vlan id>
spanning-tree guard root
speed <speed>
lacp vpc-convergence
vpc <vpc id>
3. Type the following for the interface configuration of VMware ESXi PowerFlex compute-only nodes:
vpc domain <domain id>
peer-switch
role priority <priority>
system-priority <priority>
peer-keepalive destination <Destination oob mgmt ip> source <Source oob mgmt ip>
virtual peer-link destination <ip> source <ip> dscp 56
delay restore 300
delay restore orphan-port 300
peer-gateway
layer3 peer-router
auto-recovery reload-delay 360
ip arp synchronize
interface port-channel <port channel number>
description "virtual port-channel vpc-peer-link"
switchport mode trunk
spanning-tree port type network
vpc peer-link
interface Ethernet<slot>/<port>
switchport
switchport mode trunk
spanning-tree port type edge trunk
switchport trunk allowed vlan add <vlan id>
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
channel-group <channel group number> mode active
link debounce time 500
no shutdown
interface port-channel <port channel number>
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpduguard enable
switchport trunk allowed vlan add <vlan id>
spanning-tree guard root
speed <speed>
lacp vpc-convergence
vpc <vpc id>
4. Type the following for the interface configuration of PowerFlex storage-only nodes with vPC:
interface Ethernet<slot>/<port>
switchport
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
switchport trunk allowed vlan add <vlan id>
channel-group <channel-group number> mode active
link debounce time 500
no shutdown
interface port-channel <port channel number>
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
lacp vpc-convergence
switchport trunk allowed vlan add <vlan id>
vpc <vpc id>
5. Type the following for the interface configuration:
interface Ethernet<slot>/<port>
switchport
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
switchport trunk allowed vlan add <vlan id>
channel-group <channel-group number> mode active
link debounce time 500
no shutdown
interface port-channel <port channel number>
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
lacp vpc-convergence
switchport trunk allowed vlan add <vlan id>
vpc <vpc id>
6. Type the following to configure the interface from the leaf to spine switch:
interface Ethernet<slot>/<port>
mtu 9216
medium p2p
port-type fabric
ip unnumbered loopback <loopback number>
no shutdown
interface mgmt0
vrf member management
ip address <ip address>/<subnetmask>
interface loopback <loopback number>
ip address <ip address>/<subnetmask>
ip address <ip address>/<subnetmask> secondary #only with vPC environment
interface loopback <loopback number>
ip address <ip address>/<subnetmask>
7. Type the following to configure the distributed gateway:
interface Vlan<vlan id>
no shutdown
interface Vlan <vlan-number>
vrf member <vrf-name>
fabric forwarding mode anycast-gateway
ip address <ip address>
no shutdown
interface Vlan <vlan-number>
vrf member <vrf-name>
fabric forwarding mode anycast-gateway
ip address <ip address>
no shutdown
8. Type the following to configure the OSPF underlay:
router ospf <process id>
router-id <router-id>
interface Ethernet <slot>/<port>
ip ospf network point-to-point
ip router ospf <process id> area <area id>
no shutdown
int lo <loopback number>
ip ospf network point-to-point
ip router ospf <process id> area <area id>
no shutdown
int lo <loopback number>
ip ospf network point-to-point
ip router ospf <process id> area <area id>
no shutdown
9. Type the following to configure object tracking with vPC:
track 1 list boolean or
delay down 0 up 10
object 34
object 35
object 36
track 34 interface Ethernet<slot>/<port> line-protocol
delay down 0 up 0
track 35 interface Ethernet<slot>/<port> line-protocol
delay down 0 up 0
track 36 interface Ethernet<slot>/<port> line-protocol
delay down 0 up 0
event manager applet Uplink-Down
event track 1 state down
action 1.1 cli end
action 1.2 cli config t
action 1.3 cli int e1/1-8
action 1.4 cli shut
event manager applet Uplink-Up
event track 1 state up
action 1.1 cli end
action 1.2 cli config t
action 1.3 cli int e1/1-8
action 1.4 cli no shut
10. Type the following to configure the BGP:
router bgp <as-number>
router-id <router-id>
address-family ipv4 unicast
address-family l2vpn evpn
neighbor <neighbor ip address>
remote-as <neighbor as-number>
update-source loopback<loopback-number>
address-family ipv4 unicast
send-community
send-community extended
address-family l2vpn evpn
send-community
send-community extended
vrf <VRF Name>
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map ALL
11. Type the following to configure the VxLAN:
vlan <vlan id>
name <vlan-name>-<vlan id>
vn-segment <segment-id>
vlan <vlan id>
name customer-vm-<vlan id>
vn-segment <segment-id>
vlan <vlan id>
name customer-vm-<vlan id>
vn-segment <segment-id>
vlan <vlan id>
name customer-vm-<vlan id>
vn-segment <segment-id>
vlan <vlan id>
name customer-vm-<vlan id>
vn-segment <segment-id>
vlan <vlan id>
name VxFLEX_Management_VRF
vn-segment <segment-id>
vlan <vlan id>
name Customer_Production_VRF
vn-segment <segment-id>
12. Type the following to configure the network virtualization endpoint (NVE) interface:
interface nve <nve-interface>
no shutdown
source-interface loopback <loopback-number>
host-reachability protocol bgp
member vni <vni-number>
suppress-arp
ingress-replication protocol bgp
member vni <vni number> associate-vrf
13. Type the following to configure the MP-BGP EVPN VTEP:
route-map <map-name>
router bgp <as-number>
address-family l2vpn evpn
neighbor <neighbor-ipaddress>
remote-as <neighbor as-number>
update-source loopback<loopback-number>
address-family l2vpn evpn
send-community extended
vrf VxFLEX_Management_VRF
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map ALL
vrf Customer_Production_VRF
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map ALL
14. Type the following to configure the EVPN layer-2 networks:
evpn
vni <segment id>
rd auto
route-target import auto
route-target export auto
! Create EVPN except SDS and SDC VRF
vrf context VxFLEX_Management_VRF
vni <segment id>
rd auto
address-family ipv4 unicast
route-target both auto evpn
! Create EVPN for SDS and SDC VRF
! Import all SDC VRF to SDS VRF, SDS VRF into individual VRF.
vrf context <VRF_Name>
vni <vni-id>
rd auto
address-family ipv4 unicast
route-target both auto evpn
route-target import <RT>
route-target import <RT> evpn
route-target export <RT>
route-target export <RT> evpn
15. To enable NXAPI, type:
switch#conf t
switch(config)# feature nxapi
switch(config)#nxapi http port 80
Configuring border-leaf switches
Use this task to configure border leaf switches.
Prerequisites
NOTE: Perform this procedure only if you are configuring a leaf-spine network design.
Steps
1. From the Cisco NX-OS switch CLI, type the following for global configurations:
vlan <VLAN ID>
system jumbomtu 9216
feature nxapi
feature ospf
feature bgp
feature lldp
feature udld
feature interface-vlan
feature vn-segment-vlan-based
feature hsrp
feature lacp
feature vpc
feature fabric forwarding
feature nv overlay
nv overlay evpn
2. Type the following for the interface configuration for border-leaf to spine switch:
interface Ethernet<slot>/<port>
mtu 9216
medium p2p
ip unnumbered loopback<loopback-number>
link debounce time 500
no shutdown
interface loopback <loopback number>
ip address <ip address><subnetmask>
ip address <ip address><subnetmask>secondary
3. Type the following to configure the interfaces for L3 handoff (VRF Lite):
interface Ethernet<slot>/<port>
no switchport
no shutdown
interface Ethernet<slot>/<port>
vrf member VxFLEX_Management_VRF
encapsulation dot1q <vlan id>
ip address <ip address>/<subnetmask>
no shutdown
interface Ethernet<slot>/<port>
vrf member Customer_Production_VRF
encapsulation dot1q <vlan id>
ip address <ip address>/<subnetmask>
no shutdown
4. Type the following to configure the interfaces for L2 handoff vPC:
vpc domain <domain id>
peer-switch
peer-keepalive destination <destination ip address> source <source ip address>
peer-gateway
interface Ethernet<slot>/<port>
switchport
switchport mode trunk
spanning-tree port type network
switchport trunk allowed vlan add <vlan id>
channel-group <channel group number>
no shutdown
interface port-channel<port channel number>
vpc peer-link
interface port-channel <port channel number>
switchport mode trunk
switchport trunk allowed vlan add <vlan id>
vpc <vpc number>
5. Type the following to configure the distributed gateway for L2 handoff:
interface Vlan<vlan id>
no shutdown
vrf member <VRF Name>
ip address <ip address><subnetmask>
fabric forwarding mode anycast-gateway
interface Vlan<vlan id>
no shutdown
vrf member Customer_Production_VRF
ip address <ip address><subnetmask>
fabric forwarding mode anycast-gateway
interface Vlan<vlan id>
no shutdown
vrf member VxFLEX_Management_VRF
ip forward
interface Vlan<vlan id>
no shutdown
vrf member Customer_Production_VRF
ip forward
6. Type the following to configure the OSPF underlay:
router ospf <process id>
router-id <router-id>
interface Ethernet<slot>/<port>
ip ospf network point-to-point
ip router ospf <process id > area <area id>
no shutdown
int lo <loopback number>
ip ospf network point-to-point
ip router ospf <process id > area <area id>
no shutdown
7. Type the following to configure the BGP:
ip prefix-list no-hosts-route seq 5 deny 0.0.0.0/0 eq 32
ip prefix-list no-hosts-route seq 10 permit 0.0.0.0/0 le 32
router bgp <as-number>
router-id <router-id>
address-family ipv4 unicast
address-family l2vpn evpn
address-family vpnv4 unicast
neighbor <neighbor-ip address>
remote-as <neighbor-as-number>
update-source loopback <loopback-number>
address-family ipv4 unicast
send-community
send-community extended
address-family l2vpn evpn
send-community
send-community extended
vrf <VRF Name>
address-family ipv4 unicast
advertise l2vpn evpn
neighbor <neighbor ip address>
remote-as <as number>
address-family ipv4 unicast
prefix-list no-hosts-route out
vrf Customer_Production_VRF
address-family ipv4 unicast
advertise l2vpn evpn
neighbor <neighbor ip address>
remote-as <as-number>
address-family ipv4 unicast
prefix-list no-hosts-route out
8. Type the following to configure the VxLAN:
vlan <vlan ID>
name <vlan name>
vn-segment <vn-segment ID>
9. Type the following to configure the network virtualization endpoint (NVE) interface:
interface nve <nve-interface>
no shutdown
source-interface loopback <loopback number>
host-reachability protocol bgp
member vni <vni number>
suppress-arp
ingress-replication protocol bgp
member vni <vni number> associate-vrf
10. Type the following to configure the MP-BGP EVPN VTEP:
route-map <map-name> <sequence number>
router bgp <as-number>
address-family l2vpn evpn
neighbor <neighbor-ip address>
remote-as <neighbor-as-number>
update-source loopback<loopback-number>
address-family l2vpn evpn
send-community extended
vrf <VRF name>
address-family ipv4 unicast
advertise l2vpn evpn
neighbor <neighbor-ip address>
remote-as <neighbor as-number>
address-family ipv4 unicast
prefix-list no-hosts-route out
vrf Customer_Production_VRF
address-family ipv4 unicast
advertise l2vpn evpn
neighbor <neighbor-ip address>
remote-as <as-number>
address-family ipv4 unicast
prefix-list no-hosts-route out
11. Type the following to configure the EVPN layer-2 networks:
evpn
vni <vni number>
rd auto
route-target import auto
route-target export auto
! Create EVPN VxFLEX_Management_VRF
vrf <VRF name>
vni <vni-number>
rd auto
address-family ipv4 unicast
route-target both auto evpn
! Import all SDC VRFs into the SDS VRF, and the SDS VRF into individual VRFs
vrf context <VRF_Name>
vni <vni-id>
rd auto
address-family ipv4 unicast
route-target both auto evpn
route-target import <RT>
route-target import <RT> evpn
route-target export <RT>
route-target export <RT> evpn
12. To enable NXAPI, type:
switch# configure terminal
switch(config)# feature nxapi
switch(config)# nxapi http port 80
Adding a distributed port group to the production compute virtual distributed switch
Add a distributed port group to the production compute virtual distributed switch as part of adding a VLAN to the network.
About this task
For non-bonding NIC port design, PowerFlex rack supports three virtual distributed switches (vDS). For static bonding and
LACP bonding NIC port design, PowerFlex rack supports two virtual distributed switches (vDS). Add a distributed port group to
the production PowerFlex compute-only node vDS.
Steps
1. Log in to VMware vSphere Client Web Server.
2. Identify the production PowerFlex compute-only node vDS.
3. Right-click, and select Production Cluster vDS > Distributed Port Group > New Distributed Port Group....
4. Add the distributed port-group name and click Next.
5. Select the VLAN Type as VLAN and enter the VLAN ID.
6. Click Next.
7. Verify the port group name and VLAN ID, and click Finish.
8. Right-click Port Group > Edit Settings....
9. Select Teaming and failover > Route based on IP hash.
Verify the network configuration
Use this procedure to verify configuration before adding a VLAN to the network.
Steps
1. Verify that Virtual Link Trunking (VLT) is available between the access switches.
2. Verify that the port channels between the servers, hosts, and access switches are configured.
3. Verify that the port channels between the customer network switches and the access switches are configured.
4. Verify that the VLAN is created and enabled on the port channel between the network switches and the access switches.
5. Verify that the SVI is configured on the network switches.
6. Ensure that you have a VLAN IP address, mask, and gateway available to verify the VLAN configuration.
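The following show commands are one way to spot-check these items from the switch CLI; a hedged sketch assuming Dell OS10 on the access switches (command names vary by OS version):
show vlt <domain id>
show port-channel summary
show vlan
show interface vlan <vlan id>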
Identify the Dell EMC PowerSwitch access switches
Identify the Dell EMC PowerSwitch access switches as part of adding a VLAN to the Dell EMC network.
Identify the servers in the production PowerFlex nodes and the corresponding access switches to which they connect.
Configure the Dell EMC PowerSwitch access switches
Configure the Dell EMC PowerSwitch S5248F-ON access switches.
Prerequisites
Ensure you can log in to the Dell OS switch CLI.
Steps
1. For global configurations, type the following in the Dell OS switch CLI:
ip vrf management
interface management
interface vlan <vlan id>
hostname <name of device>
ip vrf default
no ip igmp snooping enable
no multicast snooping flood-restrict
iscsi target port 860
iscsi target port 3260
spanning-tree mode rstp
class-map type application class-iscsi
policy-map type application policy-iscsi
2. For interface configurations, type the following:
Access port configuration:
interface Ethernet <slot>/<port>
switchport access vlan <vlan id>
no shutdown
Trunk port configuration:
interface Ethernet <slot>/<port>
switchport mode trunk
switchport trunk allowed vlan [vlan]
no shutdown
Management port configuration:
interface mgmt1/1/1
no shutdown
no ip address dhcp
ip address <ip address>
VLAN configuration:
interface vlan<vlan id>
description <name>
no shutdown
Port channel interface configuration:
interface ethernet<slot>/<port>
description <name>
no shutdown
channel-group <channel number> mode active
no switchport
flowcontrol receive off
flowcontrol transmit off
3. For management route configurations, type the following:
management route 0.0.0.0/0 <gateway ip>
4. For port channel configurations, type the following:
interface port-channel<port channel number>
description <Uplink-Port-Channel-to-AGG>
no shutdown
switchport mode trunk
switchport trunk allowed vlan <vlan id>
mtu 9216
vlt-port-channel <port channel number>
interface port-channel<port channel number>
description <Downlink-Port-Channel-to-Servers>
no shutdown
switchport mode trunk
switchport trunk allowed vlan <vlan id>
mtu 9216
vlt-port-channel <port channel number>
spanning-tree port type edge
5. For VLT configurations, type the following:
vlt-domain <domain id>
backup destination <management ip address of peer device>
discovery-interface ethernet <1/1/x-1/1/x>
peer-routing
primary-priority <priority number>
vlt-mac <virtual mac address>
Configure the Dell EMC PowerSwitch aggregation switches
Configure the Dell EMC PowerSwitch S5232F-ON aggregation switches.
Prerequisites
Ensure you can log in to the Dell OS switch CLI.
Steps
1. For global configurations, type the following in the Dell OS switch CLI:
ip vrf management
interface management
interface vlan <vlan id>
hostname <name of device>
ip vrf default
no ip igmp snooping enable
no multicast snooping flood-restrict
iscsi target port 860
iscsi target port 3260
spanning-tree mode rstp
spanning-tree rstp priority <priority number>
class-map type application class-iscsi
policy-map type application policy-iscsi
2. For interface configurations, type the following:
Access port configuration:
interface Ethernet <slot>/<port>
switchport access vlan <vlan id>
no shutdown
Trunk port configuration:
interface Ethernet <slot>/<port>
switchport mode trunk
switchport trunk allowed vlan <vlan id>
no shutdown
Management port configuration:
interface mgmt1/1/1
no shutdown
no ip address dhcp
ip address <ip address>
ipv6 address autoconfig
VLAN configuration:
interface vlan<vlan id>
description <name>
no shutdown
ip address <ip address>
Port channel interface configuration:
interface ethernet <slot>/<port>
description <name>
no shutdown
channel-group <channel number> mode active
no switchport
flowcontrol receive off
flowcontrol transmit off
3. For management route configurations, type the following:
management route 0.0.0.0/0 <gateway ip>
4. For port channel configurations, type the following:
interface port-channel<channel number>
no shutdown
switchport mode trunk
switchport access vlan <vlan id>
switchport trunk allowed vlan <vlan ids>
vlt-port-channel <channel number>
5. For VLT configurations, type the following:
vlt-domain <domain id>
backup destination <management ip address of peer device>
discovery-interface ethernet <1/1/x-1/1/x>
peer-routing
primary-priority <priority number>
vlt-mac <virtual mac address>
6. For VRRP configurations, type the following:
interface vlan<vlan id>
description <description>
no shutdown
ip address <ipaddress>/<subnetmask>
vrrp-group <vrrp id>
virtual-address <virtual ipaddress>
Configure the PowerFlex rack service port for a Dell EMC PowerSwitch management
switch
Configure the Dell EMC PowerSwitch S4148T-ON management switch.
Prerequisites
Ensure that you have the following:
● Access and credentials to the flex-oob-mgmt-<vlanid> switch by network or console
● Access and credentials to the single VMware vCenter
● Access and credentials to the management controller jump server
● One labeled CAT6 patch cable to provide a permanent service port connection to the Dell EMC PowerSwitch S4148T-ON
switch
● Access and credentials to the Dell-OS switch CLI
The default network settings that are used in the service port configuration are as follows:
Subnet address: 172.16.255.248/30
Netmask: 255.255.255.252
Customer support laptop IP address: 172.16.255.249
Jump server IP address: 172.16.255.250
Steps
For management configurations, type the following in the Dell OS CLI:
int e1/48
description "Dell Flex Engineering Service Port DO NOT DISABLE"
switchport access vlan <vlan id>
flowcontrol receive off
flowcontrol transmit off
spanning-tree port type edge
no shutdown
Add a network interface card to the service port
Use this procedure to add a NIC to the service port.
Steps
1. Log in to the single VMware vCenter.
2. Click Management Cluster.
3. Right-click the jump server, and select Edit settings.
4. Click Select > Network > Add and set the network to flex-oob-mgmt-<vlanid>. The default VLAN is 101.
5. Click OK.
Configure the Windows-based jump server to use the service port
Use this procedure to configure the Windows-based jump server to use the service port.
Steps
1. Log in to the jump server VM using RDP or VMware console in vCenter.
2. Open the Control Panel and double-click Network and Sharing Center.
3. Click Change Adapter Settings on the navigation bar.
4. Select and rename the new interface Flex-Support-VLAN###, where ### is the flex-oob-mgmt-<vlanid> value.
5. Configure the IP address on the new interface:
IP Address - <>
Netmask <>
No default gateway
6. Save and close.
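As an alternative to the Control Panel, the interface can be configured from an elevated command prompt; a minimal sketch using the default service-port addresses above and the hypothetical interface name Flex-Support-VLAN101:
netsh interface ip set address name="Flex-Support-VLAN101" static 172.16.255.250 255.255.255.252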
Configure the embedded operating system-based jump server to use the service port
Use this procedure to configure the embedded operating system-based jump server to use the service port.
Steps
1. Log in to the jump server VM using VNC or VMware console in vCenter.
2. Open the System Settings and double-click Network Settings.
3. Click the Wired tab, and select the new NIC.
For example, ens224.
4. From the IPv4 Address tab, change Method to Manual.
5. Type the following to configure the IP on the new interface:
IP Address - <>
Netmask <>
No default gateway
6. Save and close.
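As an alternative to the System Settings UI, nmcli can configure the interface; a minimal sketch using the default service-port addresses above and the example NIC ens224 (this assumes the connection is named after the NIC):
nmcli connection modify ens224 ipv4.method manual ipv4.addresses 172.16.255.250/30
nmcli connection up ens224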
Creating a network interface card team for Windows Server 2016
or 2019
NIC teaming enables grouping of physical Ethernet network adapters into one or more software-based virtual network
adapters. These virtual network adapters provide improved performance and fault tolerance in the event of a network adapter failure.
Perform either of the following procedures to create a NIC team for Windows Server 2016 or 2019:
● Create a NIC team for Windows Server 2016 or 2019
● Create a NIC team for Windows Server 2016 or 2019 using PowerShell
Create a network interface card team for Windows Server 2016 or 2019
Perform this procedure to create a network interface card (NIC) team for Windows Server 2016 or 2019 using the Server
Manager.
Steps
1. On the taskbar, click Start > Server Manager.
2. In Server Manager, click Local Server.
3. In the Properties pane, locate NIC Teaming, then click Disabled on the right. The NIC Teaming dialog box opens.
4. In Adapters and Interfaces, select the network adapters to add to a NIC team.
5. Click TASKS near TEAMS and click Add to New Team.
The NIC Teaming dialog box opens and displays network adapters and team members.
6. Enter a name for the new NIC team in the Team name box.
7. If needed, in Additional Properties, select values for Teaming mode, Load balancing mode, and Standby adapter. In
most cases, the highest-performing load-balancing mode is Dynamic.
8. If you want to configure or assign a VLAN number to the NIC team, click the link to the right of Primary team interface.
The New Team interface dialog box opens.
9. To configure VLAN membership, click Specific VLAN. Enter the VLAN information in the first section of the dialog box.
10. Click OK to close the dialog box after entering the VLAN information.
11. Click OK to close the NIC Teaming dialog box.
Create a network interface card team for Windows Server 2016 or 2019
using PowerShell
Perform this procedure to create a NIC team for Windows Server 2016 or 2019 using a PowerShell prompt.
Steps
1. Open an elevated PowerShell prompt.
2. If you are presented with the User Account Control prompt, click Yes.
3. Enter Get-NetAdapter to view the list of network adapters.
4. Enter New-NetLbfoTeam -Name [TEAMNAME] -TeamMembers "[NIC2]", "[Slot 4 Port 2]" where:
Option             Description
[TEAMNAME]         Team name for the network adapters
[NIC2]             Name of the first network adapter found in the list of network adapters
[Slot 4 Port 2]    Name of the second network adapter found in the list of network adapters
5. Enter Y to confirm that you want to perform this action.
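For example, a hypothetical invocation that also sets the teaming mode and the Dynamic load-balancing algorithm recommended earlier (adapter and team names are placeholders):
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC2","Slot 4 Port 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic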
Chapter 6: Administering the storage
Perform the following procedures to administer the PowerFlex rack storage.
Considerations
When administering storage in a hyperconverged deployment, observe the following considerations:
● Always put a Storage Data Server (SDS) into PowerFlex maintenance mode using the PowerFlex GUI management software
before putting the VMware ESXi host into maintenance mode in VMware vSphere.
● Ensure no more than one PowerFlex controller node server is offline at any given time.
● Ensure no more than one host – Storage VM (SVM), VMware ESXi, Red Hat Enterprise Linux, or an embedded operating
system – is in maintenance mode at any given time.
● Ensure PowerFlex data networks (data 1, data 2, data 3, and data 4) are not routable.
● Ensure that you have firewall details regarding what ports/protocols are open before performing administrative procedures.
● When unmapping a volume using the PowerFlex GUI management software, clear the hosts that you do not want to remove
the volume from by clicking Configuration > Volumes.
● While storage I/O control (SIOC), storage I/O statistics collection (SIOSC), and network I/O control (NIOC) are useful for
vSAN environments, their implementation may actually cause significant issues in a PowerFlex environment, so use of these
options is not supported.
PowerFlex provides integrated capabilities to limit network bandwidth and I/O operations per second (IOPS) limits for each
volume for each SDC. For more information, see Configure and Customize Dell EMC PowerFlex.
PowerFlex management controller datastore and
virtual machine details
The following table explains which datastores to use:
Controller type                       Volume name      Size (GB)                VMs                                     Domain name   Storage pool
PowerFlex management controller 1.0   vsan_datastore   All available capacity   All                                     N/A           N/A
PowerFlex management controller 2.0   vcsa             3500                     pfmc_vcsa                               PFMC          PFMC-pool
                                      general          1600                     Management VMs, for example:            PFMC          PFMC-pool
                                                                                management gateway, customer gateway,
                                                                                presentation server, CloudLink,
                                                                                additional VMs
                                      pfxm             1000                     PowerFlex Manager                       PFMC          PFMC-pool
NOTE: For PowerFlex management controller 2.0, verify the capacity before adding additional VMs to the general volume.
If there is not enough capacity, expand the volume before proceeding.
Viewing PowerFlex Gateway details
You can view performance metrics and storage details for PowerFlex Gateway.
About this task
PowerFlex Manager stores up to 15 GB of PowerFlex Gateway metrics. Once this threshold is exceeded, PowerFlex Manager
automatically purges the oldest data to free up space.
Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, click a PowerFlex Gateway resource from the list to view its details.
The right pane displays basic information about the resource, such as the unused and spare space for PowerFlex. It also
shows the number of protection domains, volumes, Storage Data Clients (SDC), and Storage Data Servers (SDS).
3. In the right pane, click View Details.
The Details page displays detailed information about the PowerFlex Gateway resource on the following tabs:
● Performance: Displays metrics for system, volume, SDS, SDC, and storage pool. These metrics include the read, write,
and total I/O operations per second (IOPS) and read, write, and total bandwidth.
● Storage: Displays the protection domain, storage pools, volumes, and mapped Storage Data Clients.
● Nodes: Displays the protection domain, type (SDS or SDC), connection, name, IP address for the SDS or SDC, and fault
set.
The Details page also allows you to launch the PowerFlex Installer, launch a wizard to see the status of a recent upgrade,
launch a wizard to reconfigure the MDM roles in the PowerFlex cluster, or migrate the PowerFlex Gateway.
4. Click the Performance tab. In the Total IOPS and Bandwidth sections, you can filter the display by selecting any of the
following options:
● Choose to view graphs showing data for the last hour, day, week, month, or year.
● Choose to view total IOPS for system, volume, SDS, or SDC. You can then select the specific item to view from the
drop-down list. If the volume, SDS, or SDC name is not defined, the drop-down list displays the instance ID.
Current IOPS and bandwidth (KB/s) always display in the top-left corner. The data is automatically refreshed every thirty
seconds. Performance data is available when PowerFlex is deployed to a service.
To export all metrics that are collected for the PowerFlex Gateway to a CSV file, click Export All.
5. Click the Storage tab. For each storage pool within a protection domain, PowerFlex Manager shows the granularity setting
and acceleration pool. For each volume within a storage pool, PowerFlex Manager shows the Compression and Type
settings.
For a storage pool that has compression disabled, the granularity is set to medium. For a storage pool that has compression
enabled, the granularity is set to fine.
To search for volumes associated with the gateway:
a. Enter a volume or datastore name search string in the Search Text box.
b. Optionally, select a Size range.
c. Optionally, select Thick or Thin for the Type.
d. Optionally, select Enabled or Disabled for the Compression setting.
e. Optionally, select a specific Storage Pool.
f. Click Search.
PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns
more than 50 volumes, you must refine the search criteria to return only 50 volumes.
Administering the CloudLink Center
Adding and managing CloudLink Center licenses
Perform the following procedures to add CloudLink Center licenses and manage CloudLink Center licenses through PowerFlex
Manager.
License CloudLink Center
Use this procedure to add licenses to CloudLink Center.
About this task
CloudLink license files determine the number of machine instances, CPU sockets, encrypted storage capacity, or physical
machines with self-encrypting drives (SEDs) that your organization can manage using CloudLink Center. License files also define
the CloudLink Center usage duration.
NOTE: CloudLink Center can act as a Key Management Interoperability Protocol (KMIP) server if you upload a KMIP license
to it.
Steps
1. Log in to CloudLink Center.
2. Select System > License.
3. Click Upload License.
4. Browse to the license file and click Upload.
NOTE: If the CloudLink environment is managed by PowerFlex Manager, after you update the license, go to the
Resources page, select the CloudLink VMs, and click Run Inventory.
Add the CloudLink Center license in PowerFlex Manager
Use this procedure to add CloudLink Center in PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Software Licenses, and click Add.
3. Click Choose File, and browse to the license file.
4. Select Type as CloudLink, and click Save.
5. From Resource, select the CloudLink VMs, and click Run inventory.
Delete expired or unused CloudLink Center licenses from PowerFlex
Manager
Use this procedure to delete expired or unused CloudLink Center licenses from PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Software Licenses.
3. Select the license you want to delete, and click Delete.
4. From Resource, select the CloudLink VMs, and click Run inventory.
Configure custom syslog message format
Use this procedure to configure the custom syslog message format.
Steps
1. Log in to CloudLink Center.
2. Click Server > Change Syslog Format. The Change Syslog Format dialog box is displayed.
3. From the Syslog Format list, select Custom.
4. Enter the string for the syslog entry, and click Change.
Manage a self-encrypting drive (SED) from CloudLink Center
Use this procedure to manage an SED device through CloudLink Center.
About this task
When managing SEDs from CloudLink Center, be aware of the following:
● CloudLink Center can manage encryption keys for self-encrypting drives (SEDs).
● Managing SEDs with CloudLink Center requires the CloudLink agent to be installed on the machines with SEDs.
● When managed by CloudLink Center, SED encryption keys are stored in the current keystore for the machine group they are
in.
● The functionality for managing SEDs requires a separate SED license.
● If the SED cannot retrieve the key from CloudLink Center, the SED remains locked.
Steps
From the CloudLink Center, select Agent > Machines, click Actions and select Manage SED. Ownership of the encryption key
is enabled.
NOTE: This option is only available if an SED license is uploaded and an SED is detected in the physical machine managed
by CloudLink Center. The Manage SED option does not change data on an SED; it only takes ownership of the encryption
key.
Manage a self-encrypting drive from the command line
As an alternative to CloudLink Center, use the command line to manage an SED.
Steps
1. Log in to the Storage Data Servers (SDS).
2. To manage the SED from the command line, type svm manage [device name].
For example, svm manage /dev/sdb.
Release a self-encrypting drive
Use this procedure to release an SED that is managed by CloudLink.
About this task
This option allows you to release ownership of an SED that is managed by CloudLink. This option is only available if an SED
license is uploaded and an SED is detected in the physical machine managed by CloudLink Center.
When CloudLink releases an SED, the encryption key is released in CloudLink Center.
Steps
1. From CloudLink Center, go to Agents > Machines and select SDS Machine. Click Release SED.
2. From RELEASE SED, use the menu to select the SED drive that you want to release and click Release.
The status of the SED drive changes to Releasing Control.
Once CloudLink releases control, the SED device status shows as Unmanaged.
NOTE: The Release SED option does not change any data on the SED.
Release management of a self-encrypting drive from the command
line
Use this procedure to release an SED using the command line.
Steps
1. Log in to the Storage Data Server (SDS).
2. To release the SED from the command line, type svm release [device name].
For example, svm release /dev/sdb.
Configure the PowerFlex compute-only nodes outside
of PowerFlex Manager
Use this procedure to configure static routing on PowerFlex compute-only nodes for multi-tenancy and routed reachability from
SDC to SDS.
Steps
1. Start an SSH session to the PowerFlex compute-only node using PuTTY.
2. Log in as root.
3. In the PowerFlex CLI, type esxcli network ip route ipv4 add -g <gateway> -n <destination subnet
in CIDR>.
For example, esxcli network ip route ipv4 add -g 192.168.61.1 -n 192.168.60.0/24.
Administering the storage outside of PowerFlex
Manager
Configuring DNS
You can configure or modify DNS settings after deployment.
Steps
1. Open an SSH session with the Storage VM (SVM) using PuTTY or a similar SSH client, and log in as root.
2. In the PowerFlex CLI, type cat /etc/resolv.conf to access the DNS and domain details.
3. Type vi /etc/resolv.conf and edit the following:
● search: domain name
● nameserver: DNS server IP
NOTE: Ensure the SVM details are updated in the DNS server to resolve from name to IP and IP to name.
4. Type nslookup <SVM IP> to verify the configuration.
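For example, a minimal /etc/resolv.conf sketch with hypothetical values:
search example.local
nameserver 192.168.10.50
nameserver 192.168.10.51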
Configure PowerFlex storage-only nodes and PowerFlex
hyperconverged nodes with static route for SDC reachability
Use this procedure to configure PowerFlex storage-only nodes and PowerFlex hyperconverged nodes for multi-tenancy and
routed reachability from SDS to SDC.
Steps
1. Start an SSH session to the PowerFlex storage-only node or PowerFlex hyperconverged node using PuTTY.
2. Log in as root.
3. In the PowerFlex CLI, type echo "<destination subnet> via <gateway> dev <SIO interface>" > route-<SIO interface>.
For example, echo "192.168.61.0/24 via 192.168.60.1 dev p2p2" > route-p2p2.
Retrieving PowerFlex performance metrics
Retrieving PowerFlex performance metrics using the PowerFlex GUI
Use this procedure to retrieve PowerFlex performance metrics using the PowerFlex GUI.
Prerequisites
Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the Flexible I/O Tester (fio) to generate IOPS. The following fio command line generates random reads and writes:
fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting
Steps
1. To retrieve overall performance metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, look at the PERFORMANCE data.
c. The Dashboard displays the following:
● Overall system IOPS
● Overall system bandwidth
● Overall system latency
2. To retrieve volume-specific metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, select CONFIGURATION > Volumes.
3. To retrieve SDS-specific metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, Select CONFIGURATION > SDSs.
Retrieving PowerFlex performance metrics using a PowerFlex version prior
to 3.5
Use this procedure to retrieve PowerFlex performance metrics for a PowerFlex version prior to 3.5.
Prerequisites
Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the Flexible I/O Tester (fio) to generate IOPS. The following fio command line generates random reads and writes:
fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting
Steps
1. To retrieve overall performance metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, look at the IO Workload page.
c. The Dashboard displays the following:
● Overall system IOPS
● Overall system bandwidth
● Read/write statistics
● Average I/O size
2. To retrieve volume-specific metrics:
a. Select Frontend > Volumes.
b. Select a volume and click the Property Sheet icon.
c. The volume performance metrics are displayed in the General section of the Volume Properties pane.
3. To retrieve host-specific metrics:
a. Select Frontend > SDCs.
b. Select a host, and click the Property Sheet icon.
c. The host performance metrics are displayed in the General section of the Host SDC Properties pane.
Verifying VMware vSphere host settings
Perform the following procedures to verify VMware vSphere vStorage APIs for array integration (VAAI) and atomic test and set
(ATS) settings prior to creating PowerFlex datastores.
Verifying VMware vSphere vStorage APIs for array integration settings
Prior to creating any datastores, verify the following vStorage APIs for array integration (VAAI) settings across all hosts in a
vSphere cluster. Failure to do so can lead to data outages.
Steps
1. Open an SSH session with the VMware ESXi host using PuTTY or a similar SSH client.
2. Log in as root.
3. From the CLI, type the following:
esxcli system settings advanced list --option=/VMFS3/HardwareAcceleratedLocking
4. Verify HardwareAcceleratedLocking is set to 1, indicating it is enabled and that the host can use atomic test and set
(ATS) locking.
5. Type the following:
esxcli system settings advanced list -o /VMFS3/UseATSForHBonVMFS5
6. Verify UseATSForHBOnVMFS5 is set to 1, indicating it is enabled and that the host uses ATS for VMFS5 heartbeat.
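If either option is set to 0, it can typically be re-enabled with the corresponding set command; a hedged example (confirm against your ESXi version before changing host settings):
esxcli system settings advanced set --option=/VMFS3/HardwareAcceleratedLocking --int-value=1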
Verifying atomic test and set settings
Prior to creating any datastores, verify the atomic test and set settings. Failure to do so can lead to data outages.
Steps
1. Open an SSH session with the VMware ESXi host using PuTTY or a similar SSH client.
2. From the CLI, type esxcli storage vmfs lockmode list.
3. Verify that all ATS-capable datastores are configured with locking mode set to ATS (not ATS+SCSI).
Configuring compression on PowerFlex storage-only nodes with
NVDIMMs
Identify NVDIMM acceleration pool in a protection domain
Use this procedure to identify a protection domain that is configured with NVDIMM acceleration pool using the PowerFlex GUI.
NVDIMM acceleration pools are required for compression.
Steps
1. Log in to the PowerFlex GUI presentation server as an administrative user.
2. Click Configuration > Acceleration Pool.
3. Note the acceleration pool name. The name is required while creating a compression storage pool.
Identify NVDIMM acceleration pool in a protection domain using a PowerFlex
version prior to 3.5
Use this procedure to identify a protection domain that is configured with NVDIMM acceleration pool using a PowerFlex version
prior to 3.5. NVDIMM acceleration pools are required for compression.
Steps
1. Log in to the PowerFlex GUI as an administrative user.
2. Select Backend > Storage.
3. Filter By Storage Pools.
4. Expand the SDSs in the protection domains. Under the Acceleration Type column, identify the protection domain with Fine
Granularity Layout. This is a protection domain that has been configured with NVDIMM accelerated devices.
5. The acceleration pool name (in this example, AP1) is listed under the column Accelerated On. This is needed when creating
a compression storage pool.
Enable fine granularity metadata read cache using the command line
Use this procedure to enable metadata read cache for fine granularity storage pools for PowerFlex versions 3.5 and later in the
customer cluster.
About this task
● PowerFlex Manager versions 3.8 and higher enable this for new fine granularity storage pools.
● To determine the cache size, use the following formula, using the result only if it is less than or equal to 32 GB:
FGMC_RAM_in_GiB = (Total_Drive_Capacity_in_TiB / 2) * 4 * Compression_Factor * Percent_of_Metadata_to_Cache
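For example, a worked calculation with illustrative values only: with 9.3 TiB of total drive capacity, an assumed compression factor of 2, and 30 percent of metadata cached, FGMC_RAM_in_GiB = (9.3 / 2) * 4 * 2 * 0.3 ≈ 11.2 GiB, which is under the 32 GB limit.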
Steps
1. Access the primary MDM:
a. In a hyperconverged deployment, use SSH to connect to the SVM that is acting as the primary MDM.
b. In a two-layer deployment, use SSH to connect to the PowerFlex storage-only node that is acting as the primary MDM.
2. From the PowerFlex CLI, type:
scli --login --username admin --password MDM_password
scli --set_default_fgl_metadata_cache_size --protection_domain_name <Protection Domain
Name> --metadata_cache_size_mb <FGMC_RAM_in_GiB>
scli --enable_fgl_metadata_cache --protection_domain_name <Protection Domain Name>
3. For each SDS with fine granularity storage pools, type scli --set_fgl_metadata_cache_size --sds_id <ID> --metadata_cache_size_mb <FGMC_RAM_in_GiB>.
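For example, a minimal sketch using the illustrative sizing above, a hypothetical protection domain PD1, and a hypothetical SDS ID of 42 (11.2 GiB rounded to 11468 MB):
scli --set_default_fgl_metadata_cache_size --protection_domain_name PD1 --metadata_cache_size_mb 11468
scli --enable_fgl_metadata_cache --protection_domain_name PD1
scli --set_fgl_metadata_cache_size --sds_id 42 --metadata_cache_size_mb 11468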
Create a compression storage pool using the PowerFlex GUI presentation
server
A storage pool is a group of devices within a protection domain. Use this procedure to create a compression storage pool in the
PowerFlex GUI presentation server.
Prerequisites
Identify a protection domain that has NVDIMM acceleration configured. See Identify NVDIMM acceleration pool in a protection
domain for more information.
About this task
Use this procedure to create a fine granularity storage pool, which enables compression.
Steps
1. Log in to the PowerFlex GUI presentation server.
2. Click Configuration > Storage Pool.
3. Add the following new storage pool details:
● Name: Provide a name
● Media Type: SSD
● Data Layout: Fine Granularity
● Acceleration Pool: Select the acceleration pool
4. Click Add Storage Pool.
Related information
Identify NVDIMM acceleration pool in a protection domain
Create a compression storage pool using the PowerFlex GUI
A storage pool is a group of devices within a protection domain. Use this procedure to create a compression storage pool with a
PowerFlex version prior to 3.5.
Prerequisites
Identify a protection domain that has NVDIMM acceleration configured. See Identify NVDIMM acceleration pool in a protection
domain for more information.
About this task
Use this procedure to create a fine granularity storage pool, which enables compression.
Steps
1. Log in to PowerFlex GUI.
2. Select Backend > Storage.
3. Right-click the protection domain that is configured by NVDIMM acceleration and select Add Storage Pool.
4. Add the following new storage pool details:
● Name: Provide a name
● Media Type: SSD
● Data Layout: Fine Granularity
● Acceleration Pool: Select the acceleration pool
● Fine Granularity: Enable Compression
5. Click OK and click Close.
Related information
Identify NVDIMM acceleration pool in a protection domain using a PowerFlex version prior to 3.5
Create a compression volume using the PowerFlex GUI presentation server
Use this procedure to create a compressed volume with the PowerFlex GUI presentation server.
Steps
1. Log in to the PowerFlex GUI presentation server.
2. Click Configuration > Volumes > Add.
3. Enter the volume details:
● Number of volumes: Enter the number of volumes
● Volume Name: Enter a name to describe the volume
● Size: Desired volume size in GB
● Provisioning: Select Thin or Thick
● Storage Pool: Select the fine granularity storage pool
4. Click Add Volume.
5. Select the volume, and click Mapping > Map. Map to all the desired hosts.
6. Click Map, and click Apply.
Create a compression volume using PowerFlex GUI
Use this procedure to create a compressed volume with a version prior to PowerFlex 3.5.
Steps
1. Log in to PowerFlex GUI.
2. Select Frontend > Volumes.
3. Right-click a fine granularity storage pool and select Add Volume (fine granularity storage pools are identified with Ⓕ).
4. Add Volume Details:
● Name: Volume Name
● Size: Desired Volume Size
● Enable Compression
5. Click OK and click Close.
6. Right-click volume and select Map.
7. Map to all desired hosts.
8. Click Map Volumes.
9. Click Close.
Verify the VMware ESXi host recognizes the NVDIMM
Use this procedure to verify that the NVDIMM is recognized by PowerFlex.
Prerequisites
Ensure that the following prerequisites are met:
● The VMware ESXi host and the VMware vCenter Server are running version 6.7.
● The VM version of the SVM is version 14 or higher.
● The NVDIMM firmware is version 9324 or higher.
● The VMware ESXi host recognizes the NVDIMM.
Steps
1. Log in to the VMware vCenter or single vCenter (customer cluster).
2. Select the VMware ESXi host.
3. Go to the Summary tab.
4. In the Hardware section, verify that the required amount of persistent memory is listed.
Add NVDIMM to PowerFlex
Use this procedure to add an NVDIMM to PowerFlex.
About this task
NOTE: If the node does not have NVDIMM, skip this procedure.
Steps
1. Using the PowerFlex GUI, enter the SDS into maintenance mode.
a. Select the SDS, and click MORE.
b. Click Enter Maintenance Mode > Protected > Enter Maintenance Mode.
2. Use VMware vCenter or single vCenter (customer cluster) to shut down the SVM.
3. Add the NVDIMM device to the SVM:
a. Edit the SVM settings.
b. Add an NVDIMM device.
c. Set the required size of the NVDIMM device.
d. Click OK.
4. Increase the RAM size according to the following node configuration table:
Configuration                                  FG capacity   NVDIMM capacity   NVDIMM capacity       Required FG    Total RAM required   Total RAM required
                                                             required          delivered (minimum    RAM capacity   in the SVM           in the SVM
                                                                               16 GB units - must                   (no CloudLink)       (with CloudLink)
                                                                               add in pairs)
R640/R650 with 10x 960 GB disks                9.3 TB        8 GB              2 x 16 GB = 32 GB     17 GB          25 GB                29 GB
R640/R650 with 10x 1.92 TB disks               19.2 TB       15 GB             2 x 16 GB = 32 GB     22 GB          30 GB                34 GB
R640/R650 with 10x 3.84 TB disks               38.4 TB       28 GB             2 x 16 GB = 32 GB     32 GB          40 GB                44 GB
R740xd/R750 and R840 with 24x 960 GB disks     22.5 TB       18 GB             2 x 16 GB = 32 GB     25 GB          32 GB                36 GB
R740xd/R750 and R840 with 24x 1.92 TB disks    46.08 TB      34 GB             4 x 16 GB = 64 GB     38 GB          46 GB                50 GB
R740xd/R750 and R840 with 24x 3.84 TB disks    92.16 TB      66 GB             6 x 16 GB = 96 GB     62 GB          70 GB                74 GB
Additional services memory (applies to all configurations): MDM: 5.4 GB; LIA: 350 MB; operating system base: 1 GB; buffer: 1 GB; CloudLink: 4 GB (approximately 8 GB total without CloudLink, 12 GB with CloudLink).
5. Power on the SVM from VMware vCenter or single vCenter (customer cluster).
6. Using the PowerFlex GUI, exit the SDS from maintenance mode.
7. Create a namespace on the NVDIMM:
a. Use SSH to connect to the SVM.
b. Type ndctl create-namespace -f -e namespace0.0 --mode=dax --align=4K.
8. Repeat these steps for each PowerFlex node with NVDIMMs.
9. Add the NVDIMM devices on the acceleration pool:
a. Use SSH to connect to the primary MDM.
b. For each SDS with an NVDIMM, type scli --add_sds_device --sds_name <SDS_NAME> --device_path /dev/daxX.0
--acceleration_pool_name <ACCP_NAME> --force_device_takeover to add the NVDIMM devices to the acceleration pool.
Check the PowerFlex GUI for the SDS name and the acceleration pool name.
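To confirm that the namespace from step 7 was created in DAX mode, you can list the namespaces; a hedged check (ndctl output format varies by version):
ndctl list -N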
Migrate vCLS VMs for PowerFlex hyperconverged nodes and
PowerFlex compute-only nodes
Use this procedure to migrate the vCLS VMs.
About this task
The VMware vSphere 7.0 update creates vCLS VMs, which are automatically populated when hosts are added to the cluster.
Each cluster contains three vCLS VMs. Do not make changes to these VMs.
Steps
1. Go to the VMs using the VMs and Templates view, or click vCenter Server Extensions > vSphere ESX Agent Manager
> VMs.
2. Click the vCLS folder.
3. Right-click the VM and click Migrate.
4. Click Yes on the pop-up window.
5. Click Change storage only.
6. For PowerFlex hyperconverged nodes and PowerFlex compute-only nodes, migrate the vCLS VMs to PowerFlex volumes.
These volumes are mapped after the PowerFlex deployment.
7. Repeat these steps for all of the vCLS VMs.
Set V-Tree compression mode for PowerFlex GUI
Use this procedure to enable compression mode according to fine granularity storage pools.
About this task
Compression is done by applying a compression algorithm to the data. This procedure is only applicable for PowerFlex versions
prior to 3.5.
Steps
1. Go to the Frontend > Volumes > V-Tree Migration view or the Frontend > Volumes > V-Tree Capacity Utilization view.
For more information, see Creating and Mapping Volumes or Configure and Customize Dell EMC PowerFlex.
2. Right-click the volume, and select Set V-Tree Compression Mode.
The Set V-Tree Compression Mode dialog box displays.
3. Select the Enable Compression check box.
4. Click OK.
5. In the Are you sure? dialog box, click OK.
6. Click Close.
The compression mode is enabled for V-Tree.
Defining rebuild and rebalance settings for PowerFlex
Set rebuild and rebalance settings
Use this procedure to define rebuild and rebalance settings before and after RCM upgrades.
Steps
1. Select the following options to set the network throttling:
a. Log in to PowerFlex GUI presentation server using the primary MDM IP address.
b. From the Configuration tab, click Protection domain and select the protection domain. Click Modify and choose
the Network Throttling option from the menu. From the pop-up window, verify that Unlimited is selected for all the
parameters.
c. Click Apply.
2. Select the following options to set the I/O priority:
a. Log in to the PowerFlex GUI presentation server using the primary MDM IP address.
b. From the Configuration tab, click Storage Pool and select the storage pool. Click Modify and choose the I/O Priority
option from the menu to view the current policy settings.
c. Before an RCM upgrade, set the following policies:
Policy setting             Value
Rebuild policy             Unlimited
Rebalance policy           Unlimited
Migration policy           Retain the default value
Maintenance mode policy    Limit concurrent I/O = 10
If the PowerFlex GUI presentation server allows it, set all values to Unlimited.
d. After an RCM upgrade, set the following policies:
Policy setting             Value
Rebuild policy             Unlimited
Rebalance policy           Limit concurrent I/O = 10
Migration policy           Retain the default value
Maintenance mode policy    Limit concurrent I/O = 10
e. Click Apply.
Set rebuild and rebalance settings using PowerFlex versions prior to 3.5
Use this procedure to define rebuild and rebalance settings for PowerFlex versions prior to 3.5.
Steps
1. Select the following options to set the network throttling:
a. Log in to the PowerFlex GUI.
b. Click Backend > Storage.
c. Right-click protection domain, and select Set Network Throttling.
● Rebalance per SDS: Unlimited
● Rebuild per SDS: Unlimited
● Overall per SDS: Unlimited
2. Select the following options to set the I/O priority:
a. In PowerFlex GUI, click Backend > Storage.
b. Right-click storage pool, and select Set I/O Priority.
c. To set the rebuild:
● Limit concurrent I/O.
● Rebuild concurrent I/O limit: 1
d. To set the rebalance, click the Rebalance tab:
● Favor application I/O
● Rebalance concurrent I/O limit: 1
● Max speed per device: 10240
Adding a PowerFlex storage-only node to PowerFlex
Add a PowerFlex storage-only node to PowerFlex.
About this task
NVDIMM acceleration pools are required to implement data compression.
To enable replication, see Add Storage Data Replication (SDR) to PowerFlex.
Prerequisites
Type systemctl status firewalld to verify whether firewalld is enabled. If firewalld is inactive or disabled, see
the Enabling firewall service on PowerFlex storage-only nodes and SVMs KB article to enable the service and the required ports for
each PowerFlex component.
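For reference, a minimal sketch of enabling and checking the service with standard systemd tooling (the KB article remains the authoritative source for the required ports):
systemctl enable --now firewalld
firewall-cmd --state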
Steps
1. Deploy the embedded operating system, for example, Red Hat Enterprise Linux on the hardware server.
2. Configure IP and VLAN for the network communication and restart the network service.
3. Install the SDS, SDR, MDM, and LIA packages that you need, and start the respective service.
NOTE: The SDR package is applicable only for replication enabled service.
4. Type rpm -qpi <package file name> to verify the SDS package details before installing it on the node.
5. Type rpm -ivh <SDS package> to install the SDS package. For example: rpm -ivh EMC-ScaleIO-sds-3.0-0.769.el7.x86_64.rpm
6. Type rpm -ivh <SDR package> to install the Storage Data Replication (SDR) package. For example, rpm -ivh EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm.
7. Type pstree to verify that the SDS daemon is available.
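For example, a hedged check that filters the process tree for the SDS process (the process name can vary by version):
pstree | grep -i sds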
8. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDS.
c. Select the SDS to be added to the storage pool.
d. Click Add Device > Storage device.
e. Enter the path, name, storage pool, and media type, and click Add Device.
9. If using a PowerFlex version prior to 3.5:
a. Log in to PowerFlex GUI management software using the primary MDM IP address.
b. Click the Backend tab.
c. Right-click the storage pool that you are adding the node to, click + and select Add SDS.
A new window opens.
d. Enter the Name for the storage pool, the IP Address, and click + to add the storage devices to the pool.
e. To add another SDS later, repeat steps 8 through 10, enter the device Path and Name, and select the Storage Pool you
are using.
Related information
Add storage data replication to PowerFlex
Adding a PowerFlex storage-only node with an NVMe disk to
PowerFlex
Add a PowerFlex storage-only node with an NVMe drive to PowerFlex.
About this task
To enable replication, see Add Storage Data Replication (SDR) to PowerFlex.
Prerequisites
Derive the disk operating system path and correlate the device slot and operating system path.
Type systemctl status firewalld to verify whether firewalld is enabled. If firewalld is inactive or disabled, see
the Enabling firewall service on PowerFlex storage-only nodes and SVMs KB article to enable the service and the required ports for
each PowerFlex component.
Steps
1. Deploy the embedded operating system, for example, Red Hat Enterprise Linux on the hardware server.
2. Configure the IP address and VLAN for the network communication and restart the network service.
3. Use the following steps to install the other packages that you need and start the respective services:
● iDRAC Service Module
● nvme-cli
● NVMe drive firmware
4. To install the iDRAC Service Module:
a. See the latest RCM and download the packages from the Dell EMC Download Center.
b. Log in to the PowerFlex storage-only node.
c. Create a folder named /tmp/ism.
d. Use the following commands to extract and untar the file:
# gunzip OM-iSM-Dell-Web-LX-320-1234_A00.tar.gz
# tar -xvf OM-iSM-Dell-Web-LX-320-1234_A00.tar
e. Type cat /etc/*-release to identify the operating system installed.
f. Change directory to /tmp/ism/RHEL7/x86_64.
NOTE: The directory example is for Red Hat Enterprise Linux. Change the directory depending on the operating
system installed.
g. Use the following command to install the package:
# rpm -ivh dcism-3.2.0-1234.el7.x86_64.rpm
NOTE: If there is an issue with dependency package installation, mount the operating system ISO locally on the
server and point yum repository to the ISO mount point. Then use the yum install command to install dcism and
all its dependencies.
h. Use the following command to verify that the dcism service is running on the PowerFlex storage-only node:
#systemctl status dcismeng.service
i. Use the following command to verify that the link local IP address (169.254.0.2) is automatically configured on the
iDRAC interface of the storage-only node after successful installation of iSM:
# ip a | grep idrac
j. Use the following command to verify that the PowerFlex storage-only node operating system can communicate with
iDRAC using ping (the default link local IP address for iDRAC is 169.254.0.1):
# ping 169.254.0.1
5. To install the nvme-cli tool:
a. Download the nvme-cli package (nvme-cli-1.4-3.el7.x86_64.rpm) from https://rpmfind.net/.
b. Log in to the PowerFlex storage-only node.
c. Use WinSCP to copy the downloaded package to the /tmp folder.
d. Change directory to /tmp.
e. Use the following command to verify that the nvme-cli package is installed:
rpm -qa | grep -i nvme
NOTE: The RPM package is part of the operating system ISO.
6. To update the disk firmware for new NVMe drives:
a. Go to the Dell EMC Download Center and download the disk firmware package Express-Flash-PCIe-SSD_Firmware_M5TNF_LN64_1.0.4_A01_01.BIN.
b. Log in to the PowerFlex storage-only node.
c. Go to /tmp and create a folder named diskfw.
d. Use WinSCP to copy the downloaded disk firmware package to the /tmp/diskfw folder.
e. Change the directory to /tmp/diskfw.
f. Type chmod +x Express-Flash-PCIe-SSD_Firmware_M5TNF_LN64_1.0.4_A01_01.BIN to change the
access permissions of the file.
g. Type ./Express-Flash-PCIe-SSD_Firmware_M5TNF_LN64_1.0.4_A01_01.BIN to run the package.
h. Follow the upgrade instructions to complete the upgrade.
7. Locate the Storage Data Server (SDS) RPM packages location and list the RPM files.
8. Type rpm -qpi <package file name> to verify the SDS package details. For example:
rpm -qpi EMC-ScaleIO-sds-2.6-10000.123.el7.x86_64.rpm
9. Type rpm -ivh <SDS package> to install the SDS package. For example:
rpm -ivh EMC-ScaleIO-sds-2.0-14000.231.el7.x86_64.rpm
10. Type pstree to verify that the SDS daemon is available.
11. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDS.
c. Locate the newly added PowerFlex SDS, right-click, select Add Device, and choose Storage device from the drop-down list.
d. Type /dev/nvmeXXn1, where XX is the device number derived in the prerequisites. Provide the storage pool, verify the device type, and click Add
Device. Add all the required devices, and click Add Devices.
NOTE: If the devices are not added, ensure that you choose Advance Settings > Advance Takeover on
the Add Device Storage page.
e. Repeat steps 8 through 10 on all the SDSs where you want to add devices.
f. Ensure that all the rebuild and rebalance activities complete successfully.
g. Verify the capacity after adding the new node.
12. If using a PowerFlex version prior to 3.5:
a. Log in to PowerFlex GUI using the primary MDM IP address.
b. Click the Backend tab.
c. Right-click the storage pool where you are adding the node and select Add SDS.
A new window opens.
d. Enter the Name for the storage pool, the IP Address, and click + to add the storage devices to the pool.
e. To add another SDS later, repeat steps 8 through 10, enter the device Path and Name, and select the Storage Pool
that you are using.
Related information
Add storage data replication to PowerFlex
Configuring replication on PowerFlex storage-only nodes
This section describes how to enable or disable replication on PowerFlex storage-only nodes manually.
Add storage data replication to PowerFlex
Use this procedure to add storage data replication to PowerFlex.
About this task
This procedure is required when not using PowerFlex Manager.
Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. Click the Protection tab in the left pane.
NOTE: In the PowerFlex GUI version 3.5 and earlier, the tab is Replication.
3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used, choose its role, and click Add IP. Repeat this for each IP address you are adding, and click Add
SDR.
NOTE: While adding storage data replication, it is recommended to add IP addresses for flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid>, along with flex-rep1-<vlanid> and flex-rep2-<vlanid>. Choose the
role Application and Storage for all data IP addresses, and the role External for the replication IP addresses.
6. Repeat steps 3 through 5 for each storage data replicator you are adding. If you are expanding a replication-enabled
PowerFlex node, skip steps 7 through 11.
7. Click Protection > Journal Capacity > Add, and provide the capacity percentage; 10% is the default. You can
customize it if needed.
8. Extract and add the MDM certificate:
NOTE: You can perform steps 8 through 13 only when the Secondary Site is up and running.
a. Log in to the primary MDM using SSH on both the source and destination.
b. Type scli --login --username admin and provide the MDM cluster password when prompted.
c. Run the following commands to extract the certificate on the source and destination primary MDMs.
Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
d. Copy the extracted certificate from the source primary MDM to the destination primary MDM using SCP, and vice versa.
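For example, a hypothetical copy from the source primary MDM (host addresses are placeholders):
scp /tmp/Source.crt root@<destination primary MDM IP>:/tmp/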
e. See the following example to add the copied certificate:
Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
f. Type scli --list_trusted_ca to verify the added certificate.
9. Once the journal capacity is set, log in to the primary MDM using SSH, and use the following scli commands to add the peer:
a. Type scli --login --username admin and provide the MDM cluster password when prompted. Make a
note of the system ID, because you need it in the subsequent step.
b. Use the appropriate command to add the peer system:
● To add a peer system on the primary site: scli --add_replication_peer_system --peer_system_ip
<destination primary MDM mgmt IP>,<destination secondary MDM mgmt IP> --peer_system_id
<system ID of destination site> --peer_system_name <destination site name>
● To add a peer system on the remote site: scli --add_replication_peer_system --peer_system_ip
<primary MDM mgmt IP>,<secondary MDM mgmt IP> --peer_system_id <system ID of primary
site> --peer_system_name <primary site name>
NOTE:
● For a three node cluster, add two management IP addresses (primary and secondary).
● For a five node cluster, add three management IP addresses (primary, secondary, and tertiary).
10. Create the remote consistency group (RCG):
Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443.
NOTE: Use the primary MDM IP and credentials to log in to the PowerFlex cluster.
11. Click the Protection tab from the left pane.
12. Choose RCG (Remote Consistency Group), and click ADD.
13. On the General tab:
a. Enter the RCG name and RPO.
b. Select the Source Protection Domain from the drop-down list.
c. Select the target system and Target protection domain from the drop-down list, and click Next.
d. Under the Pair tab, select the source and destination volumes.
NOTE: The source and destination volumes must be identical in size and provisioning type. Do not map the volume
on the destination site of a volume pair. Retain the read-only permission. Do not create a pair containing a
destination volume that is mapped to the SDCs with a read_write permission.
e. Click Add pair, select the added pair to be replicated, and click Next.
f. On the Review Pairs tab, select the added pair, click Add RCG, and start replication according to the requirement.
Disable replication on the PowerFlex storage-only nodes
Freeze the remote consistency group
Perform this procedure to freeze the remote consistency group (RCG). Freeze stops writing data from the target journal to the
target volume. Use this option while creating a snapshot or copying the replicated volume.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. From the left pane, click Protection > RCGs.
3. In the right pane, select the relevant RCG check box, click More > Freeze, and click Apply.
4. Verify that the operation completes successfully and click Dismiss.
Remove the remote consistency group
Use this procedure to remove the volume pairs and stop all remote consistency group (RCG) replication input and output.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. From the left pane, click Protection > RCGs.
3. In the right pane, select the relevant RCG, and click More > Remove RCG.
4. Verify that the operation completes successfully, and click Dismiss.
Remove a peer system
Use this procedure to remove replication between the peer systems.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. From the left pane, click Protection > Peer Systems.
3. In the right pane, select the relevant peer system, and click Remove.
4. Verify that the operation completes successfully, and click Dismiss.
Remove replication trust for peer system
Use this optional procedure to remove the trusted certificates from the source and target systems.
Steps
1. Open an SSH session using PuTTY or a similar SSH client.
2. Log in to the primary MDM with admin credentials.
3. In the PowerFlex CLI, type scli --list_trusted_ca to display the list of trusted certificates in the system. Note the
fingerprint details.
4. Type scli --remove_trusted_ca --fingerprint <fingerprint> to remove the certificate.
5. Verify that the following message is received:
The Certificate was successfully removed.
6. Remove the temporary certificate files, list the remaining trusted certificates, and remove the source and target certificates by fingerprint (the fingerprints shown are examples taken from the scli --list_trusted_ca output):

rm /tmp/target.crt
scli --list_trusted_ca
scli --remove_trusted_ca --fingerprint 9A:14:00:5F:3F:A0:01:73:D9:8F:69:E3:9C:53:C5:FB:CB:7B:AE:CA
rm /tmp/source.crt
scli --list_trusted_ca
scli --remove_trusted_ca --fingerprint E4:07:A4:BF:A3:2B:6B:DD:93:F4:76:87:C0:8A:8C:6D:31:83:7A:23
7. Verify that the following message is received:
The Certificate was successfully removed.
Enter SDS in maintenance mode
Use this procedure to place an SDS into maintenance mode to perform nondisruptive maintenance on the SDS.
About this task
Perform this procedure if you need to clean the network configurations.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant. If maintenance mode takes more than 30 minutes,
select PMM.
5. Click Enter Maintenance Mode.
6. Verify that the operation completes successfully and click Dismiss.
Remove storage data replication from PowerFlex
Use this procedure to remove storage data replication from PowerFlex.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Protection > SDRs.
3. In the right pane, select the SDR Name and click More > Remove.
4. Repeat for all SDRs.
Remove a storage data replication RPM
Use this procedure to remove a storage data replication RPM.
Steps
1. SSH to the PowerFlex node.
2. List all installed Dell EMC RPMs on a PowerFlex node by entering the following command: rpm -qa | grep -i emc.
3. Identify the SDR rpm - EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64.rpm.
4. Remove the RPM by entering the following command: rpm -e EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64 (the installed package name, without the .rpm suffix)
5. Verify that RPM is removed and the service is stopped.
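An illustrative removal sequence; the version string is hypothetical and should be taken from the query output on your node:

rpm -qa | grep -i emc                          # list installed Dell EMC packages
rpm -e EMC-ScaleIO-sdr-3.5-0.460.el7.x86_64    # remove the SDR package reported by the query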
Clean up network configurations
Use this procedure to clean a network configuration.
About this task
If this network is used for other functions, these steps are optional.
Steps
1. Remove the route-bond# files that are associated with the replication network, using the following commands:
cd /etc/sysconfig/network-scripts/
rm route-bond(x).xxx
Repeat this command for the second route.
2. Remove the ifcfg-bond# files that are associated with the replication network, using the following commands:
cd /etc/sysconfig/network-scripts/
rm ifcfg-bond(x).xxx
Repeat this command for the second interface.
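An illustrative cleanup, assuming the replication network uses bond0 with hypothetical VLAN IDs 161 and 162; substitute your actual bond and VLAN numbers:

cd /etc/sysconfig/network-scripts/
rm route-bond0.161 route-bond0.162    # remove both replication routes
rm ifcfg-bond0.161 ifcfg-bond0.162    # remove both replication interfaces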
Exit SDS in maintenance mode
Use this procedure to exit an SDS from maintenance mode.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Exit Maintenance Mode.
4. In the Exit SDS from Maintenance Mode dialog box, select Instant.
5. Click Exit Maintenance Mode.
6. Verify that the operation completes successfully and click Dismiss.
Repeat for each PowerFlex node in the protection domain.
Remove journal capacity
Use this procedure to remove the journal capacity.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > Journal Capacity.
3. In the right pane, select the Protection Domain, and click Remove.
4. Verify that the operation completes successfully and click Dismiss.
Remove target volumes from the destination system
Use this procedure to remove target volumes from the destination system.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. Remove the volumes used as target in the volume pair.
3. From the left pane, click Configuration > Volumes.
4. In the right pane, select the target volumes.
5. Click More > Remove.
6. Select Remove volume with all of its snapshots.
7. Click Remove.
8. Verify that the operation completes successfully and click Dismiss.
Remote asynchronous replication on PowerFlex hyperconverged
nodes
This section describes how to perform various remote replication operations from the PowerFlex presentation server UI.
Remote asynchronous replication ensures data protection of the PowerFlex environment by creating a remote copy of a volume from one cluster on another. PowerFlex supports asynchronous replication.
Setting up the peer system is the first step when configuring remote protection. The volumes on each of the systems must be the same size. If the network is up, the systems should be connected.
Remote consistency group
Remote consistency group (RCG) is an entity that includes a set of consistent volume pairs. The volume on the source from
a single Protection Domain is replicated to a remote volume from a single Protection Domain on the target. This creates a
consistent pair of volumes.
When replication is first activated for an RCG, the target volumes need to be synchronized with the source volumes. For each
volume pair, the entire contents of each source volume are copied to the corresponding target volume. When there is more than
one volume pair in the RCG, the order in which the volumes will be synchronized is determined by the order in which the volume
pairs were created. The initial synchronization occurs while all applications are running and performing I/O. Any writes to an
area of the volume that has already been synchronized will be sent to the journal. Writes to an area of the volume that has not
already been synchronized will be ignored, as the updated content will be copied over eventually as part of the synchronization.
The initial synchronization can also take place while the system is offline; however, the application I/O must first be paused. You can add and manage RCGs on both the source and target systems.
Replication direction and mapping
Replication direction and mapping according to subsequent RCG operations and possible actions:
● Normal
  Possible actions: Switchover/test failover/failover; Remove
  Replication direction or access: A to B
  Access to volumes: Access to the volumes is allowed only through the source (system A).
● After failover
  Possible actions: Reverse/restore; Remove
  Replication direction or access: N/A - Data is not replicated
  Access to volumes: By default, access to the volume is allowed through the original target (system B). It is possible to enable access through the original source (system A).
● After failover + reverse
  NOTE: Switchover and test failover are only possible after the peers are synchronized.
  Possible actions: Switchover/test failover/failover; Remove
  Replication direction or access: B to A
  Access to volumes: Access to the volumes is allowed only through the original target (system B).
● After failover + restore
  NOTE: Switchover and test failover are only possible after the peers are synchronized.
  Possible actions: Switchover/test failover/failover; Remove
  Replication direction or access: A to B
  Access to volumes: Access to the volumes is allowed only through the source (system A).
● After switchover
  Possible actions: Switchover/test failover/failover; Remove
  Replication direction or access: B to A
  Access to volumes: Access to the volumes is allowed only through the original target (system B).
● After test failover
  Possible actions: Switchover/test failover stop/failover; Remove
  Replication direction or access: A to B
  Access to volumes: Access to the volumes is allowed through both systems (system A and system B).
Add the replication consistency group
Use this procedure to add the replication consistency group.
Steps
1. Log in to the presentation server, https://presentation_server_IP:8443.
2. From the left pane, click Protection > RCGs.
3. From the right pane, click Add.
4. In the Add RCG wizard, enter the information for the RCG.
5. From the General page:
a. Enter the RCG Name.
b. Enter the recovery point objective (RPO). This is the maximum amount of data, measured in time, that can be lost if replication between the systems is compromised.
NOTE: It is recommended to enter the minimum amount of time the feature allows, which is 15 seconds.
c. Select the Source Protection Domain.
d. Select the Target System.
e. Select the Target Protection Domain.
6. Click Next.
7. From the Add Replication Pairs page:
a. Click the volume from the Source column and click the corresponding size volume from the Target column.
b. Click Add Pair.
c. Click Next.
8. From the Review Pairs page:
a. Ensure the correct source and volume pair are selected and click Add and Activate.
b. Verify that the operation completed successfully and click Dismiss.
The RCG is added to the source and target systems. Wait for the initial copy to complete before using the RCG.
9. Find the current copy status:
a. Log in to the primary MDM using SSH and type scli --login --username admin.
b. To verify the replication status, type scli --query_all_replication_pairs.
After the initial copy is complete, the PowerFlex replication system is ready for use.
Modify the recovery point objective
Use this procedure to modify the recovery point objective (RPO) time.
Steps
1. Log in to the PowerFlex GUI.
2. From the left pane, click Protection > RCGs.
3. From the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
4. From the Modify RPO for RCG <rcg name> dialog box, enter the updated RPO time and click Apply.
5. Verify the operation completed successfully and click Dismiss.
Add a replication pair to the replication consistency group
Use this procedure to add a replication pair to the replication consistency group (RCG).
Steps
1. Log in to the PowerFlex GUI and from the left pane, click Protection > RCGs.
2. From the right pane, select the relevant RCG check box and click Modify > Add Pair.
3. In the Add Pairs wizard, from the Add Replication Pairs page, select a volume from the source and a volume from the
target and click Add Pair.
4. Click Next.
5. From the Review Pairs page, verify the selected volumes are correct, and click Add Pairs.
Unpair from the replication consistency group
Use this procedure to unpair from the replication consistency group (RCG).
Steps
1. Log in to the PowerFlex GUI and from the left pane, click Protection > RCGs.
2. From the right pane, select the relevant RCG check box, in the Details pane, click the Volumes Pairs tab and click Unpair.
3. From the Remove Pair from RCG <RCG name> dialog box, click Remove Pair.
4. Verify the operation completed successfully and click Dismiss.
Freeze the replication consistency group
Use this procedure to freeze the replication consistency group (RCG).
About this task
Freezing the RCG stops writing data from the target journal to the target volume. This option is used while creating a snapshot
or copy of the replicated volume.
Steps
1. Log in to the PowerFlex GUI and from the left pane, click Protection > RCGs.
2. From the right pane, select the relevant RCG check box, and click More > Freeze Apply.
3. Click Freeze Apply.
4. Verify the operation completed successfully and click Dismiss.
Unfreeze the replication consistency group
Use this procedure to unfreeze the replication consistency group (RCG) to resume data transfer from the target journal to the target volume.
Steps
1. Log in to the PowerFlex GUI and from the left pane, click Protection > RCGs.
2. From the right pane, select the relevant RCG check box, and click More > Unfreeze Apply.
3. Click Unfreeze Apply.
4. Verify the operation completed successfully and click Dismiss.
Set the target to consistent mode
Use this procedure to set the target to consistent mode.
About this task
As data is transferred from the source to the target, the SDR verifies that the data in the journal is consistent with the data from the source. The SDR then issues an apply to the journal, which prompts the SDR to send the data to the target volume.
Steps
1. Log in to the PowerFlex GUI and from the left pane, click Protection > RCGs.
2. From the right pane, select the relevant RCG check box, and click Modify > Set Target to Consistent Mode.
3. From the Set Target to Consistent Mode RCG <RCG name> dialog box, click Apply.
4. Verify the operation completed successfully and click Dismiss.
Setting the target to inconsistent mode
Use this procedure to set the target to inconsistent mode.
About this task
Set the target to inconsistent mode to pause apply from the target journal to the target volume until the source journal has
completed sending data to the target journal. If there is no consistent image on the target journal, then the system does not
apply.
NOTE: It is recommended to take a snapshot of the target before setting the target to inconsistent mode, so that a consistent image is available for recovery.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Inconsistent Mode.
3. In the Set Target to Inconsistent Mode RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Running a test failover
Use this procedure to run a test failover of the latest copy of snapshots of source and target systems before running a failover.
About this task
Running a test failover provides the following:
● Enables you to perform resource-intensive operations on secondary storage without impacting production
● Test application upgrades on the target system without production impact
● Ability to attach different, and higher-performing compute systems or media in the target environment
● Ability to attach systems with different hardware attributes such as GPUs in the target domain
● Ability to run analytics on the data without impeding your operational systems
● Ability to perform what-if actions on the data, because that data will not be written back to production
● Elimination of many manual storage tasks, because the test is fully automated along with the snapshots
Prerequisites
Ensure replication is still running and is in a healthy state.
Before running a test failover, map the target volumes with the appropriate access mode. By default, volumes are mapped with read_write access. This creates a conflict with the mapping of target volumes, because PowerFlex sets the remote access mode of the Replication Consistency Group (RCG) to read_only. This is incompatible with the default read_write mapping access mode offered by the PowerFlex GUI; therefore, log on to the target system and manually map all volumes in the RCG using the scli command.
Example: # scli --map_volume_to_sdc --volume_name volume1 --sdc_id 47c091f200000004 --access_mode read_only
Once the remote volumes are mapped, you can test the RCG failover. The test failover command:
● Creates a snapshot on the target system for all volumes attached to the RCG.
● Replaces the pointer used by the volume mapping for each volume with a pointer to its snapshot.
● Changes the access mode of the volume mapping of each volume on the target system to read_write.
A test failover operation is only possible after the peers are synchronized.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Test Failover.
3. In the RCG <RCG name> Test Failover dialog box, click Start Test Failover.
4. In the RCG <RCG name> Test Failover using target volumes dialog box, click Proceed.
5. Verify that the operation completed successfully and click Dismiss.
Stopping test failover
This procedure automatically deletes the snapshots created during test failover.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Test Failover Stop.
3. Click Approve.
4. Verify that the operation completed successfully and click Dismiss.
Restoring replication
Use this procedure to restore replication when the remote consistency group (RCG) is in failover.
About this task
When the RCG is in failover mode, you can reverse or restore the replication. Restoring replication maintains the replication direction from the original source and overwrites all data at the target. This option may be selected from either the source or the target system.
Prerequisites
This option is available when RCG is in failover mode, or when the target system is not available. It is recommended to take a
snapshot of the original destination before restoring the replication for backup purposes.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Restore.
3. In the Restore Replication RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Reversing replication
Use this procedure to reverse replication if the remote consistency group (RCG) is in failover or switchover mode.
About this task
When the RCG is in failover or switchover mode, you can reverse or restore the replication. Reversing replication changes the
direction so that the original target becomes the source. All data at the original source is overwritten by the data at the target.
This option may be selected from either the source or the target system.
Prerequisites
This option is available when RCG is in failover mode, or when the target system is not available. It is recommended to take a
snapshot of the original source before reversing the replication for backup purposes.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Reverse.
3. In the Reverse Replication RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Running a failover
Use this procedure to failover the source role to the target system.
About this task
If the system is not healthy, you can fail over the source role to the target system. When the source is compromised, the host stops sending I/Os to the source volume, replication is stopped, and the target system takes on the role of the source. The host on the target starts sending I/Os to the volume. The target takes on the role of source, and the source takes on the role of target.
There are two options when choosing to failover a remote consistency group (RCG):
● Switchover - This option is a complete synchronization and failover between the source and the target. Application I/Os are stopped at the source, and the source and target volumes are synchronized. The access mode of the target volumes is changed for the target host, the roles are switched, and finally the access mode of the new source volumes is changed to read/write.
● Latest PiT - The system prevents any writes to the source volumes.
Prerequisites
Before performing failover, ensure you stop the application and unmount the file systems at the source (if the source is available). Target volumes are only mapped after performing a failover. Target volumes can also be mapped using scli.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Failover.
3. In the Failover RCG <RCG name> dialog box, select one of the following options:
● Switchover: (sync and failover)
● Latest PiT: (date and time)
4. Click Apply Failover.
5. In the RCG <RCG name> Sync & Failover dialog box, click Proceed.
6. Verify that the operation completed successfully and click Dismiss.
7. From the top right, click Running Jobs and check the progress of the failover.
Creating a snapshot of the remote consistency group (RCG) volume
Use this procedure to create a snapshot of the RCG.
About this task
Create a snapshot of the RCG volume from the target system. The latest image of the volume is used for the snapshot. When
creating a snapshot, the RCG enters a freeze mode.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Create Snapshots.
3. In the Create Snapshots RCG <RCG name> dialog box, click Create Snapshots.
4. Verify that the operation completed successfully and click Dismiss.
Pausing the remote consistency group
Use this procedure to pause the replication for the remote consistency group (RCG).
About this task
Pausing stops the transfer of data from the source to the target.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Pause RCG.
3. In the Pause RCG <RCG name> dialog box, click one of the following options:
● Stop data transfer - this option saves all the data in the source journal volume until there is no available capacity.
● Track Changes - this option enables manual slim mode where only metadata in the source journal volumes is saved.
4. Click Pause.
5. Verify that the operation completed successfully and click Dismiss.
Resuming the remote consistency group
Use this procedure to resume replication for the remote consistency group (RCG). Resuming starts the transfer of data from
the source to the target.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Resume.
3. From the Resume RCG <RCG name> dialog box, click one of the following options:
● Stop data transfer: Saves all the data in the source journal volume until there is not any available capacity.
● Track changes: Enables manual slim mode where only metadata in the source journal volumes is saved.
4. Click Resume RCGs.
5. Verify the operation completed successfully and click Dismiss.
Pausing the initial copy
Use this procedure to pause replication of the initial copy from the source to the target.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Initial copy > Pause Initial copy.
3. In the Pause Initial Copy <RCG name> dialog box, click Pause Initial Copy.
Resuming the initial copy
Use this procedure to resume replication of the initial copy from the source to the target.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Initial copy > Resume Initial copy.
3. In the Resume Initial Copy <RCG name> dialog box, click Resume Initial Copy.
4. Verify that the operation completed successfully and click Dismiss.
Setting priority
Use this procedure to set the order priority for copying volume pairs.
About this task
Set the priority to the highest priority for pairs to be copied first, or set to the lowest priority to be copied last.
NOTE: Setting the priority is only valid during initial copy.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box.
3. In the Volumes Pairs tab, click Initial copy > Set Priority.
4. In the Set Priority for Pair <RCG name> dialog box, select Default or High and click Save.
5. Verify that the operation completed successfully and click Dismiss.
Mapping remote consistency groups to the Storage Data Clients (SDC)
Use this procedure to designate which SDCs can access the remote consistency groups (RCGs) from the target volumes.
Prerequisites
This mapping is only enabled from the target RCG.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, click the relevant RCG check box and click Mapping > Map.
3. In the Map RCG Target Volumes dialog box, click the relevant SDC check box, and click Map.
4. In the Mappings section of the dialog box, select the volume check box and select the access mode.
NOTE: Read Access mode applies to all platforms, except Windows clusters, which require the No Access mode.
5. In the Map RCG Target Volumes dialog box, click Map RCG Target Volumes.
6. Click Apply.
7. Verify that the operation completed successfully and click Dismiss.
Unmapping a Storage Data Client (SDC) from the remote consistency group target volumes
Use this procedure to unmap a Storage Data Client (SDC) from the remote consistency group (RCG) target volumes.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, click the relevant RCG check box and click Mapping > Unmap.
3. In the Unmap dialog box, click the relevant SDC check box, and click Unmap.
4. Verify that the operation completed successfully and click Dismiss.
Mounting a VMFS datastore copy on the target VMware ESXi cluster
Use this procedure to mount a VMFS datastore copy on the target VMware ESXi cluster.
Prerequisites
Ensure you perform a storage rescan on your host to update the view of storage devices that are presented to the host.
Steps
1. In the VMware vSphere web client navigator, browse to a host, a cluster, or a data center.
2. From the right-click menu, select Storage > New datastore.
3. Select VMFS as the datastore type.
4. Enter the datastore name and if necessary, select the placement location for the datastore.
5. From the list of storage devices, select the volume that is mapped to the cluster, and click Next.
6. Select Keep existing signature and click Next.
NOTE: The Assign a new signature option is only recommended when you want to mount the volume on the same VMware ESXi host where the original volume is present. Also, be aware that assigning a new signature is an irreversible operation.
7. Click Finish.
8. Click OK.
9. Rescan for new VMFS volumes:
a. In the VMware vSphere client, browse to a host, a cluster, or a data center.
b. From the right-click menu, select Storage > Rescan Storage > Scan for new VMFS Volumes.
c. Click OK.
Determining and switching the PowerFlex Metadata Manager
Manually fail over the PowerFlex Metadata Manager (MDM) before rebooting it.
About this task
Use any one of the following methods to verify the MDM is the primary MDM:
● If your system was not deployed using the PowerFlex plug-in, you can use the PowerFlex GUI management software.
● If your system was deployed using PowerFlex Manager, you can use PowerFlex Manager.
Steps
1. Optional: If your system was not deployed using the PowerFlex plug-in, in the PowerFlex CLI type the following for each
MDM:
esxcli system module parameters list -m scini | grep IoctlMdmIPStr
2. Log in to the primary or secondary MDM using PowerFlex GUI management software.
3. Click MDM > MDM to determine which Storage VM (SVM) is the primary MDM.
4. Access the primary MDM:
a. In a hyperconverged deployment, use SSH to connect to the SVM that is acting as primary MDM.
b. In a two-layer deployment, connect to the storage-only node that is acting as primary MDM using SSH.
5. From the PowerFlex CLI, type scli --login --username admin --password MDM_password to connect to the
source node.
6. Type scli --query_cluster to re-verify the primary MDM.
7. Type scli --switch_mdm_ownership to switch the primary MDM to the secondary MDM.
8. Type scli --query_cluster to re-verify the primary MDM.
9. Connect to the new SVM that is acting as primary MDM using SSH.
10. Type scli --query_all_sds and verify that all servers are connected.
11. Type scli --query_all_sdc and verify that all servers are connected.
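A condensed recap of the switch sequence above, using only the commands from these steps (the password placeholder is illustrative):

scli --login --username admin --password <MDM_password>
scli --query_cluster            # identify the current primary MDM
scli --switch_mdm_ownership     # hand the primary role to the secondary MDM
scli --query_cluster            # confirm the new primary
scli --query_all_sds            # confirm all SDSs are connected
scli --query_all_sdc            # confirm all SDCs are connected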
Rebooting PowerFlex nodes
Rebooting a PowerFlex node in a hyperconverged deployment using
PowerFlex GUI presentation server
Use the PowerFlex GUI presentation server to reboot a PowerFlex node.
Steps
1. Log in to the PowerFlex GUI presentation server using the primary MDM IP address.
2. Click the Configuration tab, and click SDS.
3. Select the specific SDS to be taken offline.
4. Click More, select Enter Maintenance Mode.
5. Select Protected and click Enter Maintenance Mode.
6. Log in to the VMware vCenter web server, and perform the following:
a. Shut down the Storage VM (SVM).
b. Put the PowerFlex node in VMware ESXi maintenance mode.
c. Reboot the PowerFlex node.
d. Take the PowerFlex node out of VMware ESXi maintenance mode.
e. Power on the SVM.
7. Perform the following steps in the PowerFlex GUI presentation server:
a. Take the SVM out of PowerFlex maintenance mode.
b. On the Dashboard, verify that there are no rebuild or rebalance activities before rebooting the next node.
Rebooting a PowerFlex node in a hyperconverged deployment using
PowerFlex GUI
Use this procedure for PowerFlex versions prior to 3.5. Use the PowerFlex GUI to reboot a PowerFlex node.
Steps
1. Log in to the PowerFlex GUI using the primary MDM IP address.
2. In the PowerFlex GUI, select Backend > Storage Pool, and right-click SDS > Enter Maintenance Mode.
3. Click Protected, and click Enter Maintenance Mode.
4. Log in to the VMware vCenter web server, and perform the following:
a. Shut down the Storage VM (SVM).
b. Put the PowerFlex node in VMware ESXi maintenance mode.
c. Reboot the PowerFlex node.
d. Take the PowerFlex node out of VMware ESXi maintenance mode.
e. Power on the SVM.
5. Perform the following in the PowerFlex GUI:
a. Take the SVM out of PowerFlex maintenance mode.
b. On the Dashboard, verify that there are no rebuild or rebalance activities before rebooting the next node.
Rebooting a PowerFlex rack storage node in a two-layer deployment using
the PowerFlex GUI presentation server
Use the PowerFlex GUI presentation server to reboot a PowerFlex node.
Steps
1. Log in to the PowerFlex GUI presentation server, click the Configuration tab, and click SDS.
2. Select the specific SDS to take offline.
3. Click More, select Enter Maintenance Mode.
4. Select Protected, and click Enter Maintenance Mode.
5. Using iDRAC, reboot the PowerFlex storage-only node.
6. Take the SDS pool out of PowerFlex maintenance mode.
7. Verify that there are no rebuild/rebalance activities before rebooting the next node.
Rebooting a PowerFlex rack storage node in a two-layer deployment using
PowerFlex GUI
Use the PowerFlex GUI to reboot a PowerFlex node for PowerFlex versions prior to 3.5.
Steps
1. In the PowerFlex GUI, select Backend > Storage Pool and right-click SDS > Enter Maintenance Mode.
2. Select Protected, and click Enter Maintenance Mode.
3. Using iDRAC, reboot the PowerFlex storage-only node.
4. Take the SDS pool out of PowerFlex maintenance mode.
5. Verify that there are no rebuild/rebalance activities before rebooting the next node.
Entering protected maintenance mode using the PowerFlex GUI
presentation server
Use this procedure to enter protected maintenance mode using the PowerFlex GUI presentation server.
Steps
1. Log in to PowerFlex GUI presentation server.
2. Go to Configuration > SDS > Select SDS, and click Enter Maintenance Mode.
3. Select Protected > Enter Maintenance Mode.
4. Verify that node is in the Maintenance state.
Exiting protected maintenance mode using the PowerFlex GUI
presentation server
Use this procedure to exit protected maintenance mode using the PowerFlex GUI presentation server.
Steps
1. Log in to PowerFlex GUI presentation server.
2. Go to Configuration > SDS.
3. Select the node in protected maintenance mode, and click More > Exit Maintenance Mode.
4. Verify that there are no rebuild/rebalance activities before rebooting the next node.
Adding and removing a PowerFlex device for server maintenance
Adding and removing a PowerFlex device for server maintenance in a
hyperconverged deployment (VMware)
Add or remove PowerFlex devices to balance the server load before performing server maintenance, to avoid a full server rebuild, and to maintain server performance.
Steps
1. Open VMware vCenter web client from the jump server.
2. Identify each Storage VM (SVM) and its VMware ESXi host server.
3. Log in to the first PowerFlex Metadata Manager (MDM), using the PowerFlex GUI.
4. Verify that there are no rebuild or rebalance activities:
● If using a PowerFlex version prior to 3.5: Verify that there is no activity in the Rebalance or Rebuild panes.
5. Click Backend and record all Storage Data Server (SDS) devices and names.
6. Right-click the SDS device, and click Remove for each device.
A rebuild occurs.
7. If using the PowerFlex GUI presentation server: Click Dashboard, and verify that there are no rebuild and rebalance activities.
a. From the left pane, click Configuration > Devices.
b. Select the devices.
c. From the upper right menu, click More > Remove.
d. From the Remove Device dialog box, click Remove.
e. Click Dismiss.
8. When the rebuild is complete, perform any necessary SVM work.
9. Using VMware vCenter, shut down the SVM by right-clicking the SVM and clicking Shut Down Guest OS.
10. Use VMware vSphere vMotion to migrate any VMs other than the SVM to other hosts in the VMware vSphere cluster.
11. Right-click the VMware ESXi host and click Enter Maintenance Mode.
12. Reboot the VMware ESXi host and perform maintenance.
13. Allow the host to reboot, right-click the VMware ESXi host, and click Exit Maintenance Mode.
14. Start the SVM on the host.
15. After the SDS is connected, add back the previously deleted devices using the PowerFlex GUI (for PowerFlex version 3.5
and later):
a. Go to the PowerFlex GUI presentation server, https://<ipaddress>:8443, using the MDM.
b. Click Configuration > SDS.
c. Find the newly added SDS, right-click and select Add Device. Click Storage device from the drop-down menu.
d. Type /dev/sdX, where X is the ID of a device removed as part of the earlier removal. Provide the storage pool, verify the device type, and click Add Device. Add all of the required devices.
16. Allow the rebalance to complete.
Adding and removing a PowerFlex device for server maintenance in a two-layer deployment
Add or remove PowerFlex devices to balance the server load before performing server maintenance, in order to avoid a server rebuild and maintain server performance.
Steps
1. Log in to the first PowerFlex Metadata Managers (MDM) using the PowerFlex GUI.
2. Verify there is no activity in the Rebalance or Rebuild panes.
3. If using PowerFlex GUI presentation server:
a. Record all Storage Data Server (SDS) devices and names.
b. Select the Storage Data Server (SDS) to be removed, and click More > Remove for each device.
A rebuild will occur.
4. If using a PowerFlex version prior to 3.5:
a. Click Backend and record all Storage Data Server (SDS) devices and names.
b. Right-click the SDS device and click Remove for each device.
A rebuild will occur.
5. When the rebuild is complete, perform any necessary PowerFlex storage-only node work, and restart the PowerFlex storage-only node.
6. After the SDS is connected, add back the previously deleted devices using the PowerFlex GUI (for PowerFlex version 3.5
and later):
a. Go to the PowerFlex GUI presentation server, https://<ipaddress>:8443, using the MDM.
b. Click Configuration > SDS.
c. Find the newly added SDS, right-click and select Add Device. Click Storage device from the drop-down menu.
d. Type /dev/sdX, where X is the ID of a device removed as part of the earlier removal. Provide the storage pool, verify the device type, and click Add Device. Add all of the required devices.
7. Allow the rebalance to complete.
Unmapping and mapping a volume
Unmapping a volume
Use this procedure to unmap an existing volume from the PowerFlex cluster using the PowerFlex GUI presentation server in the
customer cluster.
About this task
NOTE: This procedure is not applicable for PowerFlex management controller 2.0.
Steps
1. Log in to the PowerFlex GUI presentation server.
2. Click the Configuration tab.
3. Click Volumes.
4. Select Volume, and click Mapping.
5. Click Unmap.
6. Select the nodes from the shown list and click Unmap.
Unmapping a volume using a PowerFlex version prior to 3.5
Use this procedure to unmap an existing volume from the PowerFlex cluster, using a PowerFlex version prior to 3.5 in the
customer cluster.
Steps
1. In the PowerFlex GUI, select Frontend > Volumes.
2. Expand the correct storage pool to see the mapped volumes.
3. Right-click the volume that you want to unmap and select Unmap.
4. Select the nodes from which you want to unmap this volume and click Unmap Volumes.
Mapping a volume using Windows PowerFlex compute-only node
Use this procedure to map a PowerFlex volume to a Windows PowerFlex compute-only node in the customer cluster.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Steps
1. Log in to the PowerFlex GUI and click the Configuration tab.
2. Click Volumes.
3. Select volume and click Mapping and select Map.
4. Select the required Windows compute-only node and click Map.
5. Select the volume to map and click Apply.
6. Select the Windows compute-only nodes, and click Map Volumes.
7. Log in to the Windows Server compute-only node and open disk management.
8. Right-click the Windows icon, and then select Disk Management.
9. Rescan the disk by selecting Action > Rescan Disks.
10. Find the disk in the bottom frame, right-click in the left area of the disk, and select Online.
11. Initialize the disk by performing the following steps:
a. Find the disk in the bottom frame, right-click in the right area of the disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.
c. Select the default, and click Next.
d. Assign the drive letter, and click Next.
e. Select the default, and click Next.
f. Click Finish.
Mapping a volume to a Windows-based compute-only node using a
PowerFlex version prior to 3.5
Use this procedure to map an existing volume to a Windows-based compute-only node using a PowerFlex version prior to 3.5.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Steps
1. Open the PowerFlex GUI, click Frontend, and select SDC.
2. Windows-based compute-only nodes are listed as SDCs if configured correctly.
3. Click Frontend again, and select Volumes. Right-click the volume, and click Map.
4. Select the Windows-based compute-only nodes, and then click Map.
5. Log in to the Windows Server compute-only node.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.
9. Right-click Unallocated and select New Simple Volume.
10. Select default and click Next.
11. Assign the drive letter.
12. Select default and click Next.
13. Click Finish.
Install and configure a Windows-based compute-only node to
PowerFlex
Use this procedure to install and configure a Windows-based compute-only node to PowerFlex.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Steps
1. Download the EMC-ScaleIO-sdc*.msi and LIA software.
2. Double-click EMC-ScaleIO LIA setup.
3. Accept the terms in the license agreement, and click Install.
4. Click Finish.
5. Configure the Windows-based compute-only node depending on the MDM VIP availability:
● If you know the MDM VIPs before installing the SDC component:
a. Type msiexec /i <SDC_PATH>.msi MDM_IP=<LIST_VIP_MDM_IPS>, where <SDC_PATH> is the path where
the SDC installation package is located. The <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP
addresses or the virtual IP address of the MDM.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Permit the Windows server reboot to load the SDC driver on the server.
● If you do not know the MDM VIPs before installing the SDC component:
a. Click EMC-ScaleIO SDC setup.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Type C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --add_mdm --ip <VIPs_MDMs> to
configure the node in PowerFlex.
● Applicable only if the existing network is an LACP bonding NIC:
a. Add all MDM VIPs by running: C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --mod_mdm_ip --ip <existing MDM VIP> --new_mdm_ip <all 4 MDM VIPs>
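An illustrative install-and-configure sequence; the MDM VIPs (10.10.10.21, 10.10.10.22) and installer path are hypothetical and must be replaced with your actual values:

C:\> msiexec /i C:\install\EMC-ScaleIO-sdc.msi MDM_IP=10.10.10.21,10.10.10.22
C:\> "C:\Program Files\EMC\scaleio\sdc\bin\drv_cfg.exe" --add_mdm --ip 10.10.10.21,10.10.10.22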
Updating a PowerFlex device path
When you remove a PowerFlex device prior to moving it from one pool to another, update the device path before moving it, or
the command will fail. Use this procedure to change the path configurations of all devices, to the device path assigned in the
SDS.
Steps
From the PowerFlex CLI, type scli --update_sds_original_paths (--sds_id <ID> | --sds_name <NAME> | --sds_ip <IP> [--sds_port <PORT>]) [--force_failed_devices].
Parameters:
● --sds_id <ID> is the SDS ID.
● --sds_name <NAME> is the SDS name.
● --sds_ip <IP> is the SDS IP address.
● --sds_port <PORT> is the port that is assigned to the SDS.
● --force_failed_devices forces the update of the path of failed devices.
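For example, to update the device paths of an SDS identified by a hypothetical name:

scli --update_sds_original_paths --sds_name sds-node-03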
Disabling the PowerFlex RAM read cache
This section describes how to disable the PowerFlex RAM read cache on the PowerFlex storage pool, volume, and Storage Data
Server (SDS), using web clients or the PowerFlex CLI.
Disabling the PowerFlex RAM read cache with web clients
Perform the following procedures to disable the RAM read cache using the PowerFlex GUI management software and the
PowerFlex plug-in.
Disabling the Storage Data Server RAM read cache with the PowerFlex GUI presentation
server
Disable the Storage Data Server (SDS) RAM read cache with the PowerFlex GUI presentation server in the customer cluster.
Steps
1. Open the PowerFlex GUI presentation server.
2. Click Configuration > SDS.
3. Select the SDS, click Modify > cache settings, and clear the Enable Read RAM cache check box.
Disabling the RAM read cache with the PowerFlex CLI
Perform the following procedures to disable the RAM read cache with the PowerFlex CLI.
Disabling the storage pool RAM read cache with the PowerFlex CLI
Disable the storage pool RAM read cache with the PowerFlex CLI.
Steps
1. From the PowerFlex CLI, from the primary MDM, type scli --query_all | grep RAM and determine if any storage pools use RAM read cache.
2. Type scli --query_all to determine the names of the storage pools using RAM read cache.
3. Type scli --set_rmcache_usage --protection_domain_name pd_name --storage_pool_name sp_name --dont_use_rmcache to disable RAM read cache.
4. Type scli --query_all | grep RAM to confirm no storage pools are using RAM read cache.
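An illustrative sequence, assuming a protection domain named PD-1 and a storage pool named SP-1 (both names are hypothetical):

scli --query_all | grep RAM            # check which pools use RAM read cache
scli --set_rmcache_usage --protection_domain_name PD-1 --storage_pool_name SP-1 --dont_use_rmcache
scli --query_all | grep RAM            # confirm RAM read cache is no longer in use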
Disabling the volume RAM read cache with the PowerFlex CLI
Disable the volume RAM read cache with the PowerFlex CLI.
Steps
1. From the PowerFlex CLI, type scli --query_all_volumes to determine the volume names.
2. From the PowerFlex Metadata Manager (MDM), type scli --query_volume --volume_name volume_name for
each volume, and determine if RAM read cache is enabled.
3. If RAM read cache is enabled, type scli --set_volume_rmcache_usage --volume_name volume_name --dont_use_rmcache --i_am_sure to disable it.
4. Type scli --query_volume --volume_name volume_name for each volume, and confirm that RAM read cache is
now disabled.
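For example, with a hypothetical volume name vol-data01:

scli --query_volume --volume_name vol-data01
scli --set_volume_rmcache_usage --volume_name vol-data01 --dont_use_rmcache --i_am_sure
scli --query_volume --volume_name vol-data01   # confirm RAM read cache is disabled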
Disabling the Data Server RAM read cache with the PowerFlex CLI
Disable the RAM read cache on the Storage Data Server (SDS) with the PowerFlex CLI.
Steps
1. From the PowerFlex CLI, from the primary PowerFlex Metadata Manager (MDM), type scli --query_all_sds to get
a list of PowerFlex nodes.
2. Type scli --query_sds --sds_name sds_name to determine if RAM read cache is listed for any PowerFlex nodes.
3. Type scli --disable_sds_rmcache --sds_name sds_name --i_am_sure for any PowerFlex nodes using RAM
read cache.
4. Type scli --query_sds --sds_name sds_name for each PowerFlex node, and confirm that RAM read cache is now
disabled.
Using Trusted Platform Module
Trusted Platform Module (TPM) provides hardware-based security functions.
A TPM chip is a secure crypto-processor that is designed to carry out cryptographic operations. The chip includes multiple
physical security mechanisms to make it tamper resistant, and malicious software is unable to tamper with the security functions
of the TPM.
You can add a TPM to an encrypted VM with a minimum hardware version of 14 that uses UEFI firmware.
TPM is enabled using a system setup operation.
Enabling Trusted Platform Module
Enable Trusted Platform Module (TPM) on PowerFlex.
Prerequisites
TPM supports encryption only if the operating system supports it. For more information, see the documentation and help files
that came with your TPM.
Steps
1. Restart the computer and press <F2> during the Power On Self Test (POST) to enter the system setup program.
2. Select Security > TPMSecurity and press Enter.
3. Select TPM Security > On.
4. Press Escape to exit.
5. Click Save/Exit if prompted.
6. Restart the computer and press <F2> during the POST to enter the system setup program.
7. Select Security > TPMSecurity and press Enter.
8. Select TPM Activation > Activate and press Enter.
You only need to activate TPM once.
9. If prompted, restart the computer.
Otherwise the computer will automatically restart.
Configuring a virtual Trusted Platform Module
For increased security, add a virtual cryptoprocessor that is equipped with Trusted Platform Module (TPM) technology to an
encrypted VM.
Prerequisites
● Create a VM with a minimum hardware version of 14 that uses the UEFI firmware type.
● Encrypt the VM. (For additional information, see VMware Docs.)
Steps
1. Select the VM and select VM > Settings.
2. Click Add.
3. Click Trusted Platform Module.
Removing Trusted Platform Module
After you add a virtual cryptoprocessor equipped with Trusted Platform Module (TPM) to an encrypted virtual machine, you can
remove the TPM device.
Steps
1. Select the virtual machine and select VM > Settings.
2. Select Trusted Platform Module and click Remove.
3. Click OK.
Configuring PowerFlex for Secure Remote Services
Perform the following sequential procedures to configure PowerFlex for Secure Remote Services.
Preparing to configure PowerFlex for Secure Remote Services
Verify certain configurations are in place before configuring PowerFlex for Secure Remote Services.
Steps
1. Verify Gateway v3 version 3.08 or higher is installed and configured.
2. Verify the Gateway is reachable from PowerFlex on port 9443.
3. Verify the PowerFlex management IP address you are using as the connect-in IP address is accessible from the Gateway.
4. Verify the following property is set in gatewayUser.properties on the PowerFlex Gateway:
features.enable_esrs=true
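A quick way to check, using the property file path referenced later in this chapter:

grep enable_esrs /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties
# expected output: features.enable_esrs=true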
Licensing PowerFlex for PowerFlex management controller 1.0
PowerFlex requires a valid license for production environments and provides support entitlements.
Steps
1. Identify and copy the contents of the PowerFlex license file.
2. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Settings > Licenses.
c. Paste the contents of the license file into the space provided.
3. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click System Settings > Licenses.
c. Paste the contents of the license file into the space provided.
Licensing PowerFlex for PowerFlex management controller 2.0
PowerFlex requires a valid license for production environments and provides support entitlements.
About this task
Verify PowerFlex licensing at the beginning of the implementation phase. Any issues with the license file affect the implementation.
Steps
1. Log in to the jump server using administrator credentials.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Type scli --mdm_ip <primary mdm ip> --set_license --license_file <path to license file> to
apply the PowerFlex license.
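For example, with a hypothetical primary MDM IP address and license file path:

scli --mdm_ip 10.10.10.21 --set_license --license_file /root/powerflex_license.lic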
Adding the PowerFlex serial number to the EMC Install Base
Add the PowerFlex serial number to the EMC installation base as part of configuring PowerFlex for Secure Remote Services.
About this task
EMC Install Base tickets usually process in 1–5 days.
Steps
1. In the PowerFlex GUI management software, click admin (Superuser) > System settings, and copy the value for SW ID.
2. Direct your web browser to Inside Dell, and locate your CustomerSiteID in the attached Microsoft Excel file.
You can use the filtering function in Excel to quickly locate your account by Party Name.
3. Open a case to add your PowerFlex serial number to the EMC Install Base at EMC SalesForce.
4. At a minimum, provide following information:
● ScaleIO P/N: 900-001-002 (this is considered the hardware serial number)
● Site ID: CustomerSiteID (see preceding step)
● SW ID: SW ID (see preceding step)
● ScaleIO Code: 2.0.0.2
● Remote connection: ESRS
Editing the PowerFlex Gateway
Edit the PowerFlex Gateway to enable Secure Remote Services.
Steps
1. From the PowerFlex CLI, edit the following file and set features.enable_esrs=true:

vi /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties
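As a non-interactive alternative (a sketch that assumes the property already exists in the file; back up the file before editing):

sed -i 's/^features.enable_esrs=.*/features.enable_esrs=true/' /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties
grep enable_esrs /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties   # confirm the change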
2. Type the following:
vi /opt/emc/scaleio/gateway/conf/server.xml
3. Replace the existing ciphers with the following:
ciphers="TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,TLS_DHE_DSS_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384"
4. Type the following to restart PowerFlex Gateway services:
service scaleio-gateway restart
Adding PowerFlex Metadata Manager credentials to lockbox
Add the PowerFlex Metadata Manager (MDM) credentials to lockbox as part of configuring PowerFlex for Secure Remote
Services.
Steps
1. From the PowerFlex CLI, type the following to create lockbox:
/opt/emc/scaleio/gateway/bin/SioGWTool.sh --change_lb_passphrase --new_passphrase VMwar3123

2. Type the following to add your MDM credentials to lockbox:

/opt/emc/scaleio/gateway/bin/SioGWTool.sh --set_mdm_credentials --mdm_user admin --mdm_password VMwar3123
Adding PowerFlex SSL certificates to lockbox
Add the PowerFlex SSL certificates to lockbox as part of configuring PowerFlex for Secure Remote Services.
About this task
You can reuse a certificate from another PowerFlex Gateway if necessary.
Steps
1. Direct your browser to esrs_gw_ip:9443 and download the SSL Certificate.
2. Select SHA-1 Certificate > View Certificate > Copy to a file.
3. Select Root Certificate > View Certificate > Copy to a file.
4. Use WinSCP to transfer the certificates to the PowerFlex Gateway root directory.
5. From the PowerFlex CLI, type the following:
curl -k -v --basic --user admin:VMwar3123
https://sio_gw_ip/api/login
6. Copy the token to a text file.
7. Type the following to import the root certificate:
curl -k -v --basic -uadmin:Token --form "file=@certificate_file_from_web_browser"
https://sio_gw_ip/api/trustHostCertificate/Mdm
8. Type the following to import the SHA-1 certificate:
curl -k -v --basic -uadmin:Token --form "file=@certificate_file_from_web_browser"
https://sio_gw_ip/api/trustHostCertificate/Mdm
9. Type the following to restart the PowerFlex Gateway in order to re-read lockbox:
service scaleio-gateway restart
Linking an MDM to the PowerFlex GUI presentation server
Use this procedure to link an MDM to the PowerFlex GUI presentation server in the customer cluster.
Prerequisites
● Only one MDM can link at a time.
● Unlink the existing system if you want to link another system.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443/ using MDM.
2. In the one-time setup wizard, enter the primary MDM IP address and link the presentation server to the primary MDM.
3. Approve certificates.
4. Enter the username and password of the MDM cluster.
Unlinking an MDM from the PowerFlex GUI presentation server
Use this procedure to unlink the MDM from the PowerFlex GUI presentation server in the customer cluster.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443/, using the MDM.
2. Click Settings > Unlink System.
Registering the PowerFlex Gateway with Secure Remote Services
Register the PowerFlex Gateway as part of configuring PowerFlex for Secure Remote Services.
About this task
You only need to do this for one Secure Remote Services virtual appliance in HA environments.
Steps
1. From the PowerFlex CLI, type the following to register PowerFlex:

/opt/emc/scaleio/gateway/bin/SioGWTool.sh --register_esrs_gateway --scaleio_gateway_ip <sio_gw_ip> --scaleio_gateway_user admin --scaleio_gateway_password VMwar3123 --esrs_gateway_ip <ESRS_GW_IP> --esrs_gateway_user <EMC_ESRS_Enabled_Account> --esrs_gateway_password <EMC_ESRS_RSA_Token> --connect_in_ip <sio_gw_ip>
2. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Under Configuration > Devices, select the device.
c. Confirm your PowerFlex serial number is Online.
3. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. From the PowerFlex GUI management software, select Devices > Manage Device.
c. Confirm your PowerFlex serial number is Online and Managed.
4. From the PowerFlex CLI, type the following to verify you are connected to Secure Remote Services:

/opt/emc/scaleio/gateway/bin/SioGWTool.sh --check_esrs_connectivity --scaleio_gateway_ip xxxxxx --scaleio_gateway_password xxxxxx --scaleio_gateway_user admin
5. Type the following to start Secure Remote Services and verify that it starts successfully:

/opt/emc/scaleio/gateway/bin/SioGWTool.sh --start_esrs --scaleio_gateway_ip xxxxxx --scaleio_gateway_password xxxxxx --scaleio_gateway_user admin
Enabling and disabling SDC authentication
PowerFlex allows authentication and authorization to be enabled for all SDCs connected to a cluster. Once authentication and authorization are enabled, older SDC clients and SDCs without a configured password are disconnected.
The SDC procedures are not applicable for the PowerFlex management cluster.
NOTE: If SDC authentication is enabled in a production environment, data unavailability may occur if clients are not properly
configured.
Preparing for SDC authentication
Prerequisites
You will need the following information:
● Primary and secondary MDM IP address
● PowerFlex cluster credentials
Steps
1. Log in to the primary MDM.
2. Authenticate against the PowerFlex cluster using the credentials provided.
3. List and record all connected SDCs (either NAME, GUID, ID, or IP): type scli --query_all_sdc.
4. For each SDC in your list, use the identifier you recorded to generate and record a CHAP secret: type scli --generate_sdc_password --sdc_ip <IP> (or --sdc_name, --sdc_guid, or --sdc_id) --reason "CHAP setup".
NOTE: This secret is specific to that SDC and cannot be reused for subsequent SDC entries.
For example, scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"
Example output:
[root@svm1 ~]# scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"
Successfully generated SDC with IP 172.16.151.36 password: AQAAAAAAAAAAAAA8UKVYp0LHCDFD59BrnEXNPVKSlGfLrwAk
Configuring SDCs to use authentication
Use this procedure to configure all the SDCs for authentication.
About this task
For each SDC, you must populate the generated CHAP password. On a VMware ESXi host, this requires setting a new scini parameter using the esxcli tool. Use this procedure to perform the configuration change. For Windows and Linux SDC hosts, the included drv_cfg utility can be used to update the driver and configuration file in real time. An example is given after the VMware ESXi procedure. For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
NOTE: VMware ESXi hosts must be rebooted for the new parameter to take effect.
NOTE: This procedure is not applicable for the PowerFlex management controller 2.0.
Prerequisites
Ensure you have generated preshared secrets (passwords) for all SDCs to be configured.
Ensure you have the following information:
● Primary and secondary MDM IP address or NAMEs
● Credentials to access all SDC hosts or VMs
Steps
1. SSH to the VMware ESXi host using the provided credentials.
2. List the host's current scini parameters, type: esxcli system module parameters list -m scini | grep Ioctl
IoctlIniGuidStr        string    10cb8ba6-5107-47bc-8373-5bb1dbe6efa3
   Ini Guid, for example: 12345678-90AB-CDEF-1234-567890ABCDEF
IoctlMdmIPStr          string    172.16.151.40,172.16.152.40
   Mdms IPs, IPs for MDM in same cluster should be comma separated. To configure more than one cluster, use '+' to separate between IPs. For example: 10.20.30.40,50.60.70.80+11.22.33.44. Max 1024 characters
IoctlMdmPasswordStr    string
   Mdms passwords. Each value is <ip>-<password>. Multiple passwords are separated by a ';' sign. For example: 10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIc. Max 1024 characters
NOTE: The third parameter, IoctlMdmPasswordStr, is currently empty.
3. Using esxcli, configure the driver with the existing and new parameters. To specify multiple IP address entries here, use a semicolon (;) between the entries, as shown in the following example:
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=10cb8ba6-5107-47bc-8373-5bb1dbe6efa3 IoctlMdmIPStr=172.16.151.40,172.16.152.40 IoctlMdmPasswordStr=172.16.151.40-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;172.16.152.40-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk"
NOTE: Mind the spaces between the Ioctl parameter fields and the opening/closing quotation marks. The command above is entered on a single line.
4. Now the SDC configuration is ready to be applied. On VMware ESXi nodes a reboot is necessary for this to happen. If the
SDC is a hyperconverged node, proceed with step 5. Otherwise, skip to step 8.
5. For hyperconverged nodes, use PowerFlex or the scli tool to place the corresponding SDS into maintenance mode.
6. If the SDS is also the cluster primary MDM, switch the cluster ownership to a secondary MDM and verify cluster state
before proceeding, type: scli --switch_mdm_ownership --mdm_name <secondary MDM name>
7. Once the cluster ownership has been switched (if needed) and the SDS is in maintenance mode, the SVM may be powered
down safely.
8. Place the ESXi host in maintenance mode. If workloads need to be manually migrated to other hosts, perform those migrations now, before engaging maintenance mode.
9. Reboot the ESXi host.
10. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).
11. Take the SDS out of maintenance mode (if present).
12. Repeat steps 1 through 11 for all VMware ESXi SDC hosts.
Windows and Linux SDC nodes
Windows and Linux hosts have access to the drv_cfg utility, which allows driver modification and configuration in real time. See
below for an example. The --file option allows for persistent configuration to be written to the driver's configuration file (so that
the SDC remains configured after a reload or reboot). For Windows PowerFlex compute-only nodes, only firmware upgrades are
supported.
Windows
drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>
Linux
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port
6611 --password <secret> --file /etc/emc/scaleio/drv_cfg.txt
Iterate through the relevant SDCs, using the command examples above along with the recorded information.
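On a Linux SDC that connects to more than one MDM IP address, the command is simply repeated per MDM. A minimal sketch, with placeholder secrets recorded earlier in this procedure:
# Set the CHAP password for each MDM IP; --file persists the change so the
# SDC remains configured after a driver reload or reboot.
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip 172.16.151.40 --port 6611 --password <secret_for_151_40> --file /etc/emc/scaleio/drv_cfg.txt
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip 172.16.152.40 --port 6611 --password <secret_for_152_40> --file /etc/emc/scaleio/drv_cfg.txt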
Enabling SDC authentication
Once the SDCs have been prepared and configured for SDC authentication, you may proceed with enabling the feature. This
procedure is not applicable for the PowerFlex management controller 2.0.
Prerequisites
Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from
the system when authentication is turned on.
You will need the following information:
● The primary MDM IP address
● Credentials to access the PowerFlex cluster
Steps
1. SSH to the primary MDM address.
2. Log in to the PowerFlex cluster using the provided credentials.
3. Enable the SDC authentication, type: scli --set_sdc_authentication --enable
4. Verify that the SDC authentication and authorization is turned on, and the SDCs are connected with passwords, type: scli
--check_sdc_authentication_status
Example output:
[root@svm1 ~]# scli --check_sdc_authentication_status
SDC authentication and authorization is enabled.
Found 4 SDCs.
The number of SDCs with generated password: 4
The number of SDCs with updated password set: 4
5. If the numbers of SDCs do not match, or you experience disconnected SDCs, list all disconnected SDCs and then disable SDC authentication by using the following commands:
scli --query_all_sdc | grep "State: Disconnected"
scli --set_sdc_authentication --disable
Recheck the disconnected SDCs to ensure they have the proper configuration applied. If necessary, regenerate their shared
secret and reconfigure the SDC. If unable to resolve SDC disconnection, leave the feature disabled and engage Dell EMC
support as needed.
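For a quick count of SDCs still reported as disconnected, the query output can be filtered with the standard grep count option, for example:
scli --query_all_sdc | grep -c "State: Disconnected"
A result of 0 indicates that every SDC is connected.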
Disabling SDC authentication
This procedure is not applicable for the PowerFlex management controller 2.0.
Prerequisites
Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from
the system when authentication is turned on.
You will need the following information:
● Primary MDM IP address
● Credentials to access the PowerFlex cluster
Steps
1. SSH to the primary MDM address.
2. Log in to the PowerFlex cluster using the provided credentials.
3. Disable the SDC authentication, type: scli --set_sdc_authentication --disable
Once disabled, SDCs will reconnect automatically unless otherwise configured.
Expanding an existing PowerFlex cluster with SDC authentication enabled
Once SDC authentication is enabled on a PowerFlex cluster, new SDCs must have the configuration step performed after the client is installed. This procedure is not applicable for the PowerFlex management controller 2.0. For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Prerequisites
Ensure you have the following information:
● Primary MDM IP address
● Credentials for the PowerFlex cluster
● The IP address of the new cluster members
Ensure that SDC authentication is enabled on the PowerFlex cluster.
Steps
1. Install and add the SDCs as per normal procedures (whether using PowerFlex Manager or manual expansion process).
NOTE: New SDCs will show as Disconnected at this point, as they cannot authenticate to the system.
2. SSH to the primary MDM.
3. Log in to the PowerFlex cluster using the scli tool.
4. For each of your newly added SDCs, generate and record a new CHAP secret, type: scli --generate_sdc_password --sdc_ip <IP of SDC> --reason "CHAP setup - expansion"
5. SSH and log in to the SDC host.
6. If the new SDC is a VMware ESXi host, follow the rest of this procedure. If it is Windows or Linux based, see Add a Windows or Linux authenticated SDC.
7. Type esxcli system module parameters list -m scini | grep Ioctl to list the current scini parameters of the host.
8. Using esxcli, type esxcli system module parameters set -m scini -p to configure the driver with the existing
and new parameters.
For example, esxcli system module parameters set -m scini -p "IoctlIniGuidStr=09bde878-281a-4c6d-ae4f-d6ddad3c1a8f IoctlMdmIPStr=10.234.134.194,192.168.152.199,192.168".
9. At this stage, the SDC's configuration is ready to be applied. On ESXi nodes a reboot is necessary for this to happen. If the
SDC is a hyperconverged node, proceed with step 10. Otherwise, go to step 12.
10. For PowerFlex hyperconverged nodes, use the presentation manager or scli tool to place the corresponding SDS into
maintenance mode.
11. Once the SDS is in maintenance mode, the SVM may be powered off safely.
12. Place the ESXi host in maintenance mode. No workloads should be running on the node, as the SDC has not yet been configured.
13. Reboot the ESXi host.
14. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).
15. Take the SDS out of maintenance mode (if present).
16. Repeat steps 5 through 15 for all ESXi SDC hosts.
Add a Windows or Linux authenticated SDC
Use the drv_cfg utility on a Windows or Linux machine to modify both a running and persistent configuration. Use the
following examples to perform the task on a Windows or Linux based PowerFlex node.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Prerequisites
Only one IP address is required for the command to identify the MDM to modify.
Steps
1. Press Windows +R.
2. To open the command line interface, type cmd.
3. For Windows, type drv_cfg --set_mdm_password --ip <MDM IP> in the drv_cfg utility. For example:
drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>
4. For Linux, type /opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP>. For example:
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password
<secret> --file /etc/emc/scaleio/drv_cfg.txt
5. Repeat until all new SDCs are connected.
Disabling persistent checksum on medium granularity storage
pools
All medium granularity storage pools have persistent checksum enabled by default in PowerFlex. PowerFlex calculates and validates the checksum value for the payload during transit to protect data-in-flight. Checksum protection is applied to all inputs and outputs. Use the following procedures to disable the persistent checksum, if desired, in the customer cluster.
Using PowerFlex GUI presentation server to disable persistent checksum
Use this procedure to disable the persistent checksum using PowerFlex GUI presentation server in the customer cluster.
Prerequisites
You will need the following information:
● IP address or hostname of the PowerFlex GUI presentation server
● Valid credentials for the PowerFlex cluster
● Names of the protection domains to be worked on
● Names of the storage pools to be modified
Steps
1. Log in to the PowerFlex GUI presentation server with access to the PowerFlex cluster containing the Storage Pool you
want to modify.
2. Expand the Configuration menu in the navigation pane (underneath Dashboard), by left clicking the entry.
3. Select Storage Pools.
4. Select the check box to the left of the Storage Pool you plan to modify.
5. Click More.
6. Select Background Device Scanner.
7. Clear Enable Background Device Scanner.
8. Click Apply.
9. Click Settings.
10. Click General.
11. In the resulting dialog box, leave the box checked for Enable Inflight / Persistent Checksum.
12. Clear the Persistent option.
13. Click Apply.
14. Repeat steps 1 through 13 for all additional Storage Pools to be modified.
NOTE: The background device scanner service is not re-enabled automatically once the persistent checksum is disabled. To re-enable it, repeat steps 1 through 6 above, verify whether the Enable Background Device Scanner option is checked, recheck it if desired, and click Apply.
Using the PowerFlex SCLI tool
Use this procedure to disable the persistent checksum using the command-line interface.
Prerequisites
You will need the following information:
● IP address of the primary MDM
● Valid credentials for the PowerFlex cluster
● Names of the protection domains to be worked on
● Names of the storage pools to be modified
Steps
1. SSH to the primary MDM's IP address or hostname (to identify the primary MDM, run scli --query_cluster from an SVM).
2. Log in to the MDM cluster using the provided credentials, type: scli --login --username admin --password <password>
3. Disable the background device scanner, type: scli --disable_background_device_scanner --protection_domain_name <PD> --storage_pool_name <SP>
4. Disable the persistent checksum, type: scli --disable_persistent_checksum --protection_domain_name <PD> --storage_pool_name <SP>
5. Verify that the checksum is disabled, and that the background device scanner has not been re-enabled, type: scli --query_storage_pool --protection_domain_name <PD> --storage_pool_name <SP> | grep -i -e persistent -e background
6. Re-enable the background device scanner (if desired), type: scli --enable_background_device_scanner --protection_domain_name <PD> --storage_pool_name <SP> --read_error_action report_and_fix --compare_error_action report_and_fix
7. Verify again if desired, type: scli --query_storage_pool --protection_domain_name <PD> --storage_pool_name <SP> | grep -i -e persistent -e background
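Steps 3 through 5 can be combined into a short script for a single storage pool. A minimal bash sketch, assuming the protection domain and storage pool names are passed as arguments and scli --login has already been performed:
#!/bin/bash
# Usage: ./disable_persistent_checksum.sh <protection_domain> <storage_pool>
PD="$1"
SP="$2"
scli --disable_background_device_scanner --protection_domain_name "$PD" --storage_pool_name "$SP"
scli --disable_persistent_checksum --protection_domain_name "$PD" --storage_pool_name "$SP"
# Verify that the checksum is disabled and the scanner was not re-enabled
scli --query_storage_pool --protection_domain_name "$PD" --storage_pool_name "$SP" | grep -i -e persistent -e background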
Chapter 7: Backing up and restoring
This section describes how to back up and restore data using PowerFlex Manager. This section also provides instructions for
backing up and restoring CloudLink Center.
Backing up and restoring using PowerFlex Manager
Use PowerFlex Manager to back up all user-created data to a remote share from which it can be restored.
You can schedule backups to run automatically or you can manually run an immediate backup. PowerFlex Manager backup files
include the following information:
● Activity logs
● Credentials
● Deployments
● Resource inventory and status
● Events
● Initial setup
● IP addresses
● Jobs
● Licensing
● Networks
● Templates
● Users and roles
● Resource module configuration files
● Performance metrics
To access the backup and restore features in PowerFlex Manager, click Settings and click Backup and Restore.
Refer to the PowerFlex Manager online help for detailed information on scheduling backups, performing an immediate backup,
and performing a restore.
Backing up and restoring CloudLink Center
Use this procedure to create a CloudLink Center backup file.
Viewing backup information
You can view the Backup page information.
To view the backup information, log in to the CloudLink Center, and click System > Backup. The Backup page lists the
following information:
● Backup File Prefix—Prefix used for the backup files.
● Current Key ID—The identifier for the current RSA-2048 key pair.
● Current Backup File—The name of the current backup file.
● Current Backup Time—The date and time that the current backup file was generated.
● Backup Schedule—The schedule for generating automatic backups.
● Next Backup In—The time remaining before the next automatic backup is generated.
When a backup file is downloaded, the Backup page lists the following additional information:
● Last Downloaded File—The name of the backup file that was last downloaded. Only shown when a backup file has been downloaded.
● Last Downloaded Time—The date and time of the last backup file download. Only shown when a backup file has been downloaded.
● Backup Store—The backup store configuration type. If you have not configured a backup store, the value is Local, meaning backups are saved to the local desktop.
You can also use FTP or SFTP servers as backup stores. To change the backup store, click System > Backup > Actions > Change Backup Store.
If you have configured an FTP or SFTP backup store, the following additional information is available:
○ Host —The remote FTP, SFTP, or FTPS host where you saved the CloudLink Center backups. You can set this value to
the host IP address or hostname (if DNS is configured).
○ Port—The port used to access the backup store.
○ User—The user with permission to access the backup store.
○ Directory—The directory in the backup store where backup files are available.
Changing the schedule for automatic backups
CloudLink Center automatically generates a backup file each day at midnight (UTC time).
To change the schedule for generating automatic backups, click System > Backup > Actions > Change Backup Schedule.
Generating a backup file manually
Use this procedure to generate a backup manually if you want to preserve the CloudLink Center state before the next automatic backup. Download the backup file after you generate it manually.
Steps
1. Log in to the CloudLink Center.
2. Click System > Backup.
3. Click Generate New Backup. A backup file is generated.
Generating a backup key pair
You can generate a new backup key pair. For example, if the private key for a backup key pair is lost, you can generate a new
key pair. You cannot access your backup files without the associated private key. When you generate a new key pair, CloudLink
Center automatically generates a new backup file to ensure that the current backup can be opened with the private key of the
current key pair.
Dell EMC recommends the following practices when you generate a new backup key pair.
● Download the private key to the Downloads folder for the current user account. For example, C:\Users\Administrator\Downloads.
NOTE: After a new key pair is generated, backup files created from that point on cannot be opened with the previously generated backup key.
● Generate the new key pair by clicking System > Backup > Actions > Generate and Download New Key.
Downloading the current backup file
You can download the current backup file at any time. The current backup file is either:
● The last backup file that CloudLink Center automatically created.
● The last backup file that you manually generated after the last automatic backup.
To download the backup file:
1. Click System > Backup > Actions > Download Backup.
2. In the Download Current Backup dialog box, click Download.
When you download the current backup file, CloudLink Center shows the age of the backup file.
Restore the CloudLink backup
Use this procedure to restore the CloudLink backup.
Steps
1. Log in to the CloudLink Center.
2. Click System > Backup > Actions > Restore Keystores.
3. In the Restore Keystores dialog box, complete the following steps:
a. In the Key box, browse to the private key file.
b. In the Backup box, browse to the backup file.
c. In the Unlock box, type the passcode that was set during the initial configuration of the CloudLink Center.
d. Click Restore.
A Restore Keystores succeeded message is displayed.
NOTE: If the CloudLink backup is not associated with the supplied key pair, a "file is corrupted or key mismatch" error message is displayed. In that scenario, generate a new key pair and download the backup file again.
Using permanent device loss
Permanent device loss (PDL) occurs when a disk fails or is removed from the vSphere host in an uncontrolled fashion.
Using permanent device loss with VMware vSphere
Because vSphere ESXi hosts have a limit of 255 disk devices per host, a device in a PDL state can no longer accept I/O but still occupies one of the available disk device slots. Therefore, it is better to remove the device from the host.
AutoRemoveOnPDL occurs only if there are no open handles left on the device. The auto-remove takes place when the last
handle on the device closes. If the device recovers, or if it is re-added after having been inadvertently removed, it is treated as a
new device.
Enabling and disabling permanent device loss in VMware vSphere
VMware vSphere responds differently to permanent device loss (PDL) than previous versions of VMware vSphere. Due to this change, VMware recommends that you do not disable the AutoRemoveOnPDL feature in VMware vSphere.
About this task
In VMware vSphere, the expectation is that a device in a PDL state will not return. Therefore, the device needs to be removed from the ESXi host before it can be recovered. If AutoRemoveOnPDL is disabled, perform a manual rescan to remove the device while it is in a PDL state.
AutoRemoveOnPDL is enabled by default in VMware vSphere.
Steps
Connect to the VMware ESXi host using the console or SSH.
Enable AutoRemoveOnPDL:
esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 1
Disable AutoRemoveOnPDL:
esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 0
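To confirm the current value before or after making the change, list the same advanced setting:
esxcli system settings advanced list -o "/Disk/AutoremoveOnPDL"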
Chapter 8: Powering on and off
Power on a Technology Extension with Isilon Storage
Prerequisites
WARNING: The Technology Extension for Isilon Storage must be powered on before the PowerFlex rack is
powered on.
Steps
1. Power on the switches in the Isilon cabinet.
2. Power on node 1 first by pressing the Power button on the back of the node labeled node 1.
3. Using a serial connection, connect to the console port of node 1 with a laptop and a HyperTerminal or similar connection.
4. Monitor the status of the boot process for node 1. When node 1 has completed booting, it displays the login prompt. Note
any error codes or amber lights on the node. Resolve any issues before moving to node 2.
5. Move to node 2, power on, and monitor the boot process.
6. Repeat the procedure for each node in the cluster in sequential order.
When all nodes have completed booting, the entire cluster is powered on.
Next steps
See the relevant procedure to power on the PowerFlex rack.
Power on a PowerFlex rack
To safely power on the system, power on one PowerFlex rack component at a time, in the order specified in this procedure.
About this task
Following is the general power on workflow:
● Power on the PDUs.
● Power on the network switches.
● Power on the PowerFlex management controller 1.0.
● Power on the PowerFlex management controller 2.0.
● Power on the PowerFlex Gateway VM that is running on the management cluster in VMware vCenter.
● Power on the CloudLink Center VMs that are running on the management cluster in VMware vCenter or the single VMware vCenter (controller data center).
● Power on the PowerFlex GUI presentation server that is running on the management cluster in VMware vCenter.
● For vCSA on 6.5 or 6.7, ensure the PSC1 and PSC2 are powered on before powering on the customer vCSA.
● Power on the UCC Edge VM if Flex on demand is in use.
● Power on the PowerFlex storage-only nodes and activate protection domains.
● Power on the PowerFlex hyperconverged nodes with VMware ESXi.
● Power on all VMs on the customer or single VMware vCenter (customer cluster VMs).
● Check PowerFlex health and rebuild status.
NOTE: Powering on must be completed in this order for the components that you have in your environment. Prioritize
and power on the PowerFlex storage-only nodes or PowerFlex hyperconverged nodes with PowerFlex Metadata Manager
(MDM) first.
Prerequisites
● Confirm that the servers are not damaged from the shipment.
● Verify that all connections are seated properly and check the manufacturing handoff notes for any items that must be
completed.
● Verify that the following items are available:
○ Customer-provided services such as Active Directory, DNS, NTP
○ Physical infrastructure such as reliable power, adequate cooling, and core network access
See Cisco documentation for information about the LED indicators.
See Dell EMC PowerSwitch S5200 Series Installation Guide for information about the LED indicators for Dell EMC PowerSwitch
switches.
Steps
1. Verify that the PDU breakers are in the OPEN (OFF) positions. If the breakers are not OPEN, use the small enclosed tool to
press the small white tab below each switch for the circuit to open. These switches are located below the ON/OFF breaker.
2. Connect the external AC feeds to the PDUs.
3. Verify that power is available to the PDUs and a number is displayed on the LEDs of each PDU.
4. Close the PDU circuit breakers on all PDUs for Zone A by pressing the side of the switch that is labeled ON. This action
causes the switch to lie flat. Verify all the components that are connected to the PDUs on Zone A light up.
5. Close the PDU circuit breakers on all PDUs for Zone B by pressing the side of the switch that is labeled ON. Verify all the
components that are connected to the PDUs on Zone B light up.
6. Power on the network components in the following order:
NOTE: Network components take about 10 minutes to power on.
● Management switches - Wait until the PS1 and PS2 LEDs are solid green before proceeding.
● Cisco Nexus aggregation or leaf-spine switches - Wait until the system status LED is green before proceeding.
● Dell EMC PowerSwitch switches - Wait until the system status LED is solid green before proceeding.
Next steps
Power on the PowerFlex management controller.
Power on the PowerFlex management controller 2.0
Use this procedure to power on the PowerFlex management controller 2.0.
Steps
1. Power on the PowerFlex management controller:
a. Log in to the iDRACs of each PowerFlex management controller 2.0.
b. Power on the PowerFlex management controller 2.0.
c. Verify that VMware ESXi boots and that you can ping the management IP address.
Allow up to 20 minutes for the PowerFlex management controller 2.0 to boot after VMware ESXi loads.
2. Exit maintenance mode on all of the PowerFlex management controller 2.0:
a. Log in to the VMware ESXi hosts on the PowerFlex management controller 2.0.
b. Exit maintenance mode on the PowerFlex management controller 2.0.
3. Power on the PowerFlex management controller SVMs:
a. Log in to the VMware ESXi hosts on the PowerFlex management controller 2.0.
b. Click Virtual Machines.
c. Power on the SVM.
4. Activate the protection domain:
NOTE: See the Logical Configuration Survey (LCS) for the primary and secondary MDMs. The known primary MDM is
required to activate the protection domain.
a. Log in to the primary MDM.
b. Type scli --activate_protection_domain --protection_domain_name <NAME> to activate the protection domain.
c. Type scli --query_protection_domain --protection_domain_name <NAME> to verify that the operational state of the protection domain is active.
5. Rescan the storage devices for datastores:
a. Log in to the VMware ESXi hosts on the PowerFlex management controller 2.0.
b. Click Storage > Devices > Rescan.
6. Power on the vCSA, DNS, and jump server VMs for the PowerFlex management controller 2.0:
a. Log in to PowerFlex management controller A through the VMware host client and verify that the DNS, VMware vCenter, and jump server VMs are present. If the components are not on PowerFlex management controller A, verify that they exist on PowerFlex management controller B or C and power them on.
b. Verify that the VMs started and have network connectivity.
c. Log in to the vCSA.
d. Power on the remaining management VMs in the following order:
i. PowerFlex management controller gateway
ii. PowerFlex Manager
iii. CloudLink Center VMs (if applicable)
iv. PowerFlex GUI presentation server
v. vCSA for the customer (if applicable)
vi. Secure Remote Services
e. Verify the following for HA, DRS, and the affinity rules:
● Log in to the VMware vSphere Client and browse to the cluster.
● Click Configure.
● Under Services, verify that the vSphere DRS and vSphere Availability are on.
● Under Configuration, verify that the VM and the host rules are added.
Power on the PowerFlex management controller 1.0
Power on the PowerFlex management controller 1.0.
Steps
1. Power on the PowerFlex management controller, verify VMware ESXi boots, and that you can ping the management IP
address. Allow up to 20 minutes for the PowerFlex management controller to boot after VMware ESXi loads.
2. To access each PowerFlex management controller command line and exit maintenance mode, type:
esxcli system maintenanceMode set --enable false
3. Log in to PowerFlex management controller A through the VMware host client and verify that the DNS, VMware vCenter, jump VM, and CloudLink Center VMs are installed. If the components are not on PowerFlex management controller A, verify that the components exist on PowerFlex management controller B or C and power them on.
NOTE: External PSCs are not available for a single VMware vCenter environment. Only embedded PSCs are available
for a single VMware vCenter. Log in to the PowerFlex management controller through VMware host client, and verify
the VMs on the host are in power on state.
4. Power on the DNS (if not connected to the customer network) and the VMware vCSA VM. Verify that they have started
completely and have network connectivity.
5. Log in to the VMware vSphere Client and perform a vSAN health check. Verify that no components are resyncing. Complete
the following steps:
a. Go to the vSAN cluster in the VMware vSphere Client.
b. Click the Monitor tab and click vSAN.
c. Select Health to review the vSAN health check categories.
NOTE: If the Test Result column displays Warning or Failed, expand the category to review the results of
individual health checks.
d. From the vSAN menu, select Resyncing Objects and verify that the list is empty.
6. Power on the remaining management VMs in the following order:
● Jump server
● PowerFlex Gateway
● PowerFlex Manager
● CloudLink Center VMs
● PowerFlex GUI presentation server
● For vCSA on 6.5 or 6.7, ensure the PSC1 and PSC2 are powered on before powering on the customer vCSA.
● If there is a separate vCSA for the controller and the customer, power on the customer vCSA.
7. Verify the following for HA, DRS, and affinity rules:
a. Log in to the VMware vSphere Client and browse to the cluster.
b. Click the Configure tab.
c. Under Services, verify that the vSphere DRS and vSphere Availability are on.
d. Under Configuration, verify that the VM/Host Rules are added.
8. If vCenter High Availability is in use, do the following:
a. Power on the peer and witness VMs and allow them to boot fully.
b. Select Single vCenter > Configure > vCenter HA and verify that the Active, Passive, and Witness nodes display as up
and green.
c. Click Edit, select Enable vCenter HA and click OK.
Power on the VMware NSX-T Edge nodes
Use this procedure to turn on the VMware NSX-T Edge nodes.
Steps
1. Power on the VMware NSX-T Edge nodes.
2. Verify that VMware ESXi has booted and you can ping the management IP address.
3. Power on the VMware NSX-T Edge VMs.
Power on PowerFlex storage-only nodes
Use this procedure to power on PowerFlex storage-only nodes.
Steps
1. From iDRAC, power on all PowerFlex storage-only nodes and allow them time to boot completely.
NOTE: Perform steps 2 to 6 only for a PowerFlex storage-only node cluster, and where the MDM is part of the PowerFlex storage-only node. Do not perform steps 2 to 6 when the PowerFlex storage-only node is part of a hyperconverged environment. Activation of the protection domain is included as part of powering on the PowerFlex hyperconverged nodes.
2. Log in to the PowerFlex GUI presentation server:
a. Click Configuration > SDS. Verify all the SDSs are healthy.
b. Click the devices and verify all the devices are healthy.
c. Click the protection domain. For each protection domain, under More, select Activate.
d. Verify that there are no errors, warnings, or alerts on the system.
e. Exit the presentation server.
3. Log in to all host iDRACs as root and confirm the NTP, syslog, and SNMP settings.
4. Use SSH to connect to all network switches.
5. To verify that the connected interfaces are up (not in a notconnect state), type:
show interface status
6. Do the following:
a. Log in to the jump server on the controller stack.
b. At the command prompt or the terminal, run nslookup against the correct DNS server and verify that name resolution is correct.
For example, nslookup eagles-r640-f-158.lab.vce.com (node hostname) 10.234.134.100 (dns
server ip address).
Power on all PowerFlex hyperconverged nodes
Use this procedure to power on PowerFlex hyperconverged nodes with VMware ESXi.
Steps
1. From iDRAC, power on all PowerFlex hyperconverged nodes with VMware ESXi and allow them time to boot completely.
2. Log in to the VMware vSphere Client if the PowerFlex rack includes VMware vSphere.
3. Take each PowerFlex hyperconverged node with VMware ESXi out of maintenance mode.
4. If the PowerFlex rack is a full VMware ESXi deployment, power on the MDM cluster PowerFlex SVMs: the primary, two secondaries, and two tiebreakers.
5. Log in to the PowerFlex GUI presentation server using the primary MDM IP address.
NOTE: To identify the primary MDM, log in to the SVM and run scli --query_cluster. The SVM that can resolve
the query is the primary MDM. The primary MDM details are available from query output.
a. Click Configuration > SDSs. Verify that all the SDSs are healthy.
b. From Configuration > Devices, click Devices and verify that the devices are online.
c. Verify that asynchronous replication is enabled:
● Click Protection > SDRs. Verify that the SDRs are healthy.
● Click Protection > Journal Capacity. Ensure that the journal capacity has already been added.
d. Click the protection domain. For each protection domain, under More, select Activate.
e. Verify that there are no errors, warnings, or alerts on the system.
f. Exit the PowerFlex GUI presentation server.
Power on all PowerFlex compute-only nodes
Use this procedure to power on PowerFlex compute-only nodes with VMware ESXi or Windows Server 2016 or 2019.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Steps
1. From iDRAC, power on all PowerFlex compute-only nodes and allow them time to boot completely.
2. For PowerFlex compute-only nodes with VMware ESXi:
a. Log in to the VMware vSphere Client if the PowerFlex rack includes VMware vSphere.
b. Take each PowerFlex compute-only node with VMware ESXi out of maintenance mode.
3. For PowerFlex compute-only nodes with Windows Server 2016 or 2019:
a. After the Windows compute-only node boots successfully, log in to the Windows Server 2016 or 2019 system from
Remote Desktop with administrator privilege.
b. Confirm that the mapped PowerFlex volumes are online and accessible using the disk management tool: press Windows+R to open Run, type diskmgmt.msc in the box, and press Enter.
c. Confirm that all the critical services are up and running: press Windows+R to open Run, type services.msc in the box, and press Enter.
Complete the powering on of PowerFlex rack
Complete the power on process for PowerFlex compute-only nodes with VMware ESXi.
Steps
1. From vCenter, power on the remaining VMs of all PowerFlex compute-only nodes with VMware ESXi.
2. From the VMware vSphere Client:
a. Rescan to rediscover datastores.
b. Mount the previously unmounted datastores, and add any missing VMs to the inventory.
c. Power on the remaining VMs.
3. For VMware vSphere, enable HA, DRS, and affinity rules.
4. Delete expired or unused CloudLink Center licenses from PowerFlex Manager using the following steps:
a. Log in to PowerFlex Manager.
b. Click Settings > Software Licenses.
c. Select the license to delete and click Delete.
d. Go to the resource, select the CloudLink VMs, and click Run the inventory.
Power off a PowerFlex rack
To safely power off the PowerFlex rack, power off one component at a time, in the order specified in this procedure.
About this task
Following is the general power off workflow:
● Check PowerFlex health and rebuild status.
● Power off all VMs on VMware vCenter.
● Power off the UCC Edge VM if Flex on demand is in use.
● Deactivate PowerFlex protection domains (both source and destination protection domains if asynchronous replication is enabled).
● Power off PowerFlex compute-only nodes with VMware ESXi.
● Power off PowerFlex hyperconverged nodes with VMware ESXi.
● Power off the CloudLink Center VMs that are running on the management vCenter.
● Power off PowerFlex compute-only nodes with Windows Server 2016 or 2019.
● Power off VMware NSX-T Edge nodes.
● Power off the PowerFlex management controller 1.0.
● Power off the PowerFlex management controller 2.0.
● Power off the PDUs.
NOTE: Powering off must be completed in this order for the components that you have in your environment.
Identify node types in the PowerFlex GUI presentation server:
● For PowerFlex hyperconverged nodes, click Configuration > SDC or Configuration > SDS.
● For PowerFlex compute-only nodes with VMware ESXi, click Configuration > SDC.
Prerequisites
To facilitate powering on the PowerFlex rack later, document the location of the management infrastructure VMs on their
respective hosts. Also, verify that all startup configurations for the Cisco and Dell EMC devices are saved.
See the Dell EMC VxBlock System 1000 and PowerFlex Rack Physical Planning Guide for information about power specifications.
Steps
1. Check PowerFlex health and rebuild status:
a. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM.
b. Verify the PowerFlex system is healthy and that no rebuilds or rebalances are running.
2. Shut down all VMs on the vCenter:
a. Using the VMware vSphere Client, log in to the customer vCenter or a single vCenter (customer cluster).
b. Expand the customer clusters.
c. Shut down all VMs, except for the PowerFlex Storage VMs (SVM).
CAUTION: Do not shut down the SVMs. Shutting them down now can result in data loss.
Power off protection domains using PowerFlex GUI
Use this procedure to power off the PowerFlex protection domains and PowerFlex storage-only nodes using PowerFlex GUI.
Steps
1. Log in to the PowerFlex GUI presentation server.
2. Select Configuration > Protection Domain.
3. For each protection domain, click MORE > Inactivate.
4. Click OK and type the administrator password when prompted.
5. Repeat for each protection domain and verify that each is deactivated.
6. Exit the PowerFlex GUI presentation server.
7. Using iDRAC, power off the PowerFlex storage-only node.
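If the PowerFlex GUI presentation server is unavailable, the same inactivation can be performed from the primary MDM with scli, using the commands shown later in this chapter for the management controller:
scli --inactivate_protection_domain --protection_domain_name <NAME>
scli --query_protection_domain --protection_domain_name <NAME>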
Power off protection domains using a PowerFlex
version prior to 3.5
Use this procedure to power off the PowerFlex protection domains and PowerFlex storage-only nodes using a PowerFlex
version prior to 3.5.
Steps
1. From the PowerFlex GUI, click Backend/Storage and change the view to By SDS.
2. For each protection domain, select Inactivate Protection Domain.
3. Click OK and type the administrator password when prompted.
4. Repeat for each protection domain and verify that each is deactivated.
5. Exit the PowerFlex GUI.
6. Using iDRAC, power off the PowerFlex storage-only node.
Power off PowerFlex compute-only nodes with
VMware ESXi
Use this procedure to power off PowerFlex compute-only nodes with VMware ESXi.
Steps
1. From Home, click Hosts and Clusters.
2. Disable DRS and HA on the customer cluster.
3. Place the PowerFlex compute-only nodes with VMware ESXi into maintenance mode.
4. Power off the PowerFlex compute-only nodes with VMware ESXi.
Power off PowerFlex hyperconverged nodes with
VMware ESXi
Use this procedure to power off PowerFlex hyperconverged nodes with VMware ESXi.
Steps
1. Log in to the VMware vSphere Client.
2. From VMware vCenter, click Home > Hosts and Clusters.
3. Verify that DRS and HA on the customer cluster are disabled. If they are not disabled, disable them.
4. Shut down all PowerFlex SVMs (including the five MDM cluster VMs) and the PowerFlex Gateway VM.
5. Place the PowerFlex hyperconverged nodes into maintenance mode.
6. Power off the PowerFlex hyperconverged nodes with VMware ESXi.
Power off PowerFlex compute-only nodes with
Windows Server 2016 or 2019
Use this procedure to power off PowerFlex compute-only nodes with Windows Server.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Steps
1. Connect to the Windows Server 2016 or 2019 system from Remote Desktop with an account that has administrator privileges.
2. Power off through any of the following modes:
● GUI: Click Start > Power > Shut down.
● Command line using PowerShell: Run the Stop-Computer cmdlet.
Power off the VMware NSX-T Edge nodes
Use this procedure to power off the VMware NSX-T Edge nodes.
Steps
1. In the management VMware vCenter, right-click the NSX-T Edge VMs and click Power > Shut Down Guest OS.
2. In the management VMware vCenter, right-click the NSX-T Edge nodes and click Power > Shut Down.
Power off the PowerFlex management controller 2.0
Power off the PowerFlex management controller 2.0 on each of the PowerFlex management controller ESXi hosts.
Steps
1. Determine the primary MDM IP and the protection domain name:
a. Log in to PowerFlex Manager to determine the primary MDM.
b. To view the details of a service, select the service. Scroll down on the Service Details page; the following information is displayed based on the resource types in the service:
● Primary MDM IP
● Protection Domain
2. Power off the VMs except for the PowerFlex SVMs:
a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Click Virtual Machines.
c. Power off all the VMs, except the PowerFlex SVMs.
3. Inactivate the protection domain:
a. Log in to primary MDM.
b. Type scli --inactivate_protection_domain --protection_domain_name <NAME> to inactivate the protection domain.
c. Enter Y to confirm.
d. Type scli --query_protection_domain --protection_domain_name <NAME> to verify that the
operational state of the protection domain is inactive.
4. Power off the PowerFlex SVMs:
a. Log in to each of the PowerFlex management controller ESXi hosts.
b. Click Virtual Machines.
c. Power off the PowerFlex SVM.
5. Enter maintenance mode on the PowerFlex management controller 2.0:
a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Place each host in maintenance mode.
6. Power off the PowerFlex management controller 2.0:
a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Power off the PowerFlex management controller 2.0.
c. Verify that the hosts are shut down using the iDRAC.
Power off the PowerFlex management controller 1.0
Use this procedure to power off the PowerFlex management controller.
Steps
1. Using the VMware vSphere Client, shut down all VMs running on the vSAN cluster, except the PowerFlex management
controller VMware vCenter, Temp DNS Server 1, and the jump server.
NOTE:
● If you power off the PowerFlex Manager VM while a job (such as a service deployment) is still in progress, the job is
not completed successfully.
● For the single vCenter environments, do not power off the vCSA VM running on the vSAN Cluster.
2. Migrate the PowerFlex management controller vCenter, Temp DNS Server 1, and the jump server.
3. Select VXRC cluster, disable HA, disable the affinity rules for the system, and set DRS to manual mode.
NOTE: For a single vCenter environment, the cluster name is PowerFlex RC cluster.
4. To show a report of all vSAN objects in the cluster, from the VMware ESXi host in the cluster, type:
esxcli vsan health cluster list
5. Verify that no vSAN components are resyncing and that the vSAN is healthy with no alerts or error messages. From the VMware vCenter Web Client, run the vSAN Object Health check and ensure that there are no inaccessible objects. Verify that all objects are in a Healthy state before proceeding.
a. Go to the vSAN cluster in the VMware vSphere Client.
b. Click the Monitor tab and click vSAN.
c. Select Health to review the vSAN health check categories.
NOTE: If the Test Result column displays Warning or Failed, expand the category to review the results of
individual health checks. Correct any issues before proceeding.
6. If vCenter HA is enabled on the PowerFlex management controller vCenter or single vCenter, select Single vCenter > Configure > vCenter HA, click Edit, select the Maintenance Mode radio button, and click OK.
7. Shut down the peer and witness VMs.
8. Shut down the PowerFlex management controller vCenter Server VM, Temp DNS Server 1, and jump server simultaneously
using the VMware vSphere Client.
9. To place each PowerFlex management controller into maintenance mode with the ESXCLI using SSH, starting with
PowerFlex management controller C, B, and then A, type:
esxcli system maintenanceMode set --enable true --vsanmode noAction
10. When the PowerFlex management controller hosts are in maintenance mode, shut them down by typing:
esxcli system shutdown poweroff --reason powerflexrack-shutdown
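If SSH is enabled on the controller hosts, steps 9 and 10 can be issued remotely in order (C, then B, then A). A minimal sketch with hypothetical hostnames:
# Hypothetical hostnames; replace with your controller FQDNs or IP addresses
for host in mc-esxi-c mc-esxi-b mc-esxi-a; do
    ssh root@"${host}" 'esxcli system maintenanceMode set --enable true --vsanmode noAction'
done
for host in mc-esxi-c mc-esxi-b mc-esxi-a; do
    ssh root@"${host}" 'esxcli system shutdown poweroff --reason powerflexrack-shutdown'
done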
Complete the powering off of PowerFlex rack
Use this procedure to complete the powering off of your PowerFlex rack.
Prerequisites
NOTE: Ensure that you complete a backup before powering off.
Steps
1. Connect to all the switches using SSH and save their configurations (see the example after these steps):
● For Cisco Nexus switches, type copy running-config startup-config
● For Dell EMC PowerSwitch switches, type copy running-config tftp://hostip/filepath
2. On Zone B (BLUE), turn off all PDU power breakers (OPEN position).
3. On Zone A (RED), turn off all PDU power breakers (OPEN position).
4. To verify that there is no power beyond the PDUs, disconnect the AC feeds to all PDUs.
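For example, for the Dell EMC PowerSwitch backup in step 1, with a hypothetical TFTP server at 10.1.1.50:
copy running-config startup-config
copy running-config tftp://10.1.1.50/powerswitch-backup.cfg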
Power off a Technology Extension with Isilon Storage
Prerequisites
● WARNING: The Technology Extension for Isilon Storage must be powered off after the PowerFlex rack is powered off.
● See the relevant procedure in this publication for powering off the attached PowerFlex rack.
● Look at the back panel and confirm that the LEDs on both batteries are green. If either battery is red, it has failed and must be removed from the node. If both batteries are red, replace and verify them before shutting down the node.
● Back up the switch configuration using the following commands, and store the backup file on a remote server:
copy running-config startup-config
copy startup-config tftp://<server-ip>/<switch_backup_file_name>
Steps
1. Open OneFS and log in as root.
2. Click CLUSTER MANAGEMENT > Hardware Configuration > Shutdown & Reboot Controls.
3. Select Shut Down.
4. Click Submit.
5. Power off the switches in the Technology Extension for Isilon Storage cabinet.
Results
Verify that all nodes have shut down by looking at the power indicators on each node.
If nodes do not power off:
1. SSH to the node.
2. Log in as root and type:
isi config
3. In the subsystem, type:
shutdown #
Where # represents the node number, or type:
shutdown all
If the node still does not power off, you can force the node to power off by pressing and holding the multifunction/power
button on the back of the node.
If the node still does not respond, press the Power button of the node three times, and wait five minutes. If the node still does not shut down, press and hold the Power button until the node powers off.
NOTE: Perform a forced shutdown only with a failed and unresponsive node. Never force a shutdown with a healthy node.
Do not attempt any hardware operations until the shutdown process is complete. The process is complete when the node
LEDs are no longer illuminated.
Chapter 9: PowerFlex rack password management
This section contains instructions for modifying the default administrator and/or root accounts and passwords for PowerFlex
rack components.
Default accounts and passwords are configured by the vendor or during the initial Dell EMC manufacturing build. Such accounts and passwords, if not changed, could be used to compromise the system in production. Modifying the default passwords enhances your security and completes the handover of credentials from Dell EMC manufacturing.
Updating passwords for system components
You can update the passwords for some system components from PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Resources.
3. On the All Resources tab, select one or more resources of the same type for which you want to change passwords.
For example, you could select one or more iDRAC nodes or you could select one or more PowerFlex Gateway components.
4. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
5. On the Select Components page, select one or more components for which you want to update a password and click
Next.
The component choices vary depending on which resource type you initially selected on the Resources page.
6. On the Select Credentials page, create a credential or change to a different credential having the same username.
7. Click Finish and click Yes to confirm the changes.
Updating passwords for nodes
You can update the passwords for one or more nodes from PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Resources.
3. On the All Resources tab, select one or more nodes for which you want to change the passwords.
4. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
5. On the Select Components page, specify which passwords you want to update for the selected nodes by clicking one or
more of the following check boxes.
● iDRAC Password
● Node Operating System Password
● SVM Operating System Password
6. Click Next.
7. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the iDRAC (n) object under the Type column to see details about each node you selected on the Resources page.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name and the User Name for which you want to change the password. Enter the new password
in the Password and Confirm Password fields.
c. To modify the credential, click the pencil icon for the nodes under the Credentials column and select a different
credential.
d. Click Save.
You must perform the same steps for the node operating system and SVM operating system password changes. For a node
operating system credential, only the OS Admin credential type is updated.
8. Click Finish.
9. Click Yes to confirm.
Results
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. The node operating system and SVM operating system components are updated only if PowerFlex Manager is managing a cluster with the operating system and SVM. If PowerFlex Manager is not managing a cluster with these components, these components are not
displayed and their credentials are not updated. Credential updates for iDRAC are allowed for managed and reserved nodes only.
Unmanaged nodes do not provide the option to update credentials.
Updating passwords for PowerFlex Gateway components
You can update the passwords for one or more PowerFlex Gateway components from PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Resources.
3. On the All Resources tab, select one or more PowerFlex Gateway components for which you want to change the
passwords.
4. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
5. On the Select Components page, select PowerFlex Password.
6. Click Next.
7. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the PowerFlex (n) object under the Type column to see details about each gateway you selected on the Resources page.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name, as well as the Gateway Admin User Name and Gateway OS User Name for which you
want to change passwords. Enter the new passwords for both users and confirm these passwords.
c. To modify the credential, click the pencil icon for one of the nodes under the Credentials column and select a different
credential.
d. Click Save.
8. Click Finish.
9. Click Yes to confirm.
Results
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. If
PowerFlex Manager is managing a cluster for any of the selected PowerFlex Gateway components, it updates the credentials
for the Gateway Admin User and Gateway OS User, as well as any related credentials, such as the LIA and lockbox
credentials. If PowerFlex Manager is not managing the cluster, it only updates the credentials for the Gateway Admin User and
Gateway OS User.
Update a credential in PowerFlex Manager
If the password for a resource or component is manually changed outside of PowerFlex Manager, you must update the
credential in PowerFlex Manager with the new password.
About this task
If you change the password for a resource, you must update all similar resources (for example, element manager, node,
switches, VMware vCenter, PowerFlex Gateway, or presentation server) to have the same password.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings > Credentials Management.
3. On the Credential Management page, select the resource whose password you want to edit, and click Edit.
4. In the Edit Credentials dialog box, modify the password.
5. Click Save.
6. On the menu bar, click Services and select the deployed service that contains the resource whose password is updated.
7. Under Service Actions, click Update Service Details.
Updating passwords outside of PowerFlex Manager
Use these procedures to change passwords in system components.
If you are using PowerFlex Manager, you need to also change the credential in PowerFlex Manager after changing the password
for a resource. See Update a credential in PowerFlex Manager.
Compute
Changing the PowerFlex node password
Use the iDRAC web console to change the root password.
Steps
1. Open a web browser and type: http://<ip_address_of_iDRAC>
2. Log in as root.
3. Expand iDRAC Settings.
4. Click Users > Local Users.
5. Select User ID 2 and click Edit.
ID 2 is the root user.
6. Under User Configuration, in the User Account Settings section, change the password.
a. In the Password box, enter the new password.
b. In the Confirm Password box, re-enter the new password.
7. Click Save.
Changing the system and setup passwords
Use this procedure to change the system BIOS password.
Prerequisites
If the password status is set to locked, you cannot change or delete the system or setup password.
Steps
1. Enter System Setup by pressing F2 immediately after turning on or restarting your system.
2. On the System Setup Main Menu, click System BIOS > System Security.
3. On the System Security screen, ensure that Password Status is set to Unlocked.
4. In the System Password field, change or delete the existing system password and press Enter.
5. In the Setup Password field, change or delete the existing setup password and press Enter.
If you change the system or setup password, a message prompts you to re-enter the password. If you delete a password, a
message prompts you to confirm the deletion.
6. Press Esc to return to the System BIOS screen. Press Esc again. A message prompts you to save the changes.
Changing the user passwords
Use this procedure to change user passwords for Red Hat Enterprise Linux or embedded operating system.
Steps
1. Log in to the appropriate user account. If changing a password for another user, log in as root.
2. Open a shell prompt and type one of the following commands:
● To change your own password, type: passwd
● To change password of another user, type: passwd username
3. When prompted, enter and confirm the new password.
Changing the user password for Windows-based PowerFlex compute-only
node
Use this procedure to change user passwords for Windows-based PowerFlex compute-only node in a Windows Server 2016 or
2019 operating system.
About this task
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
Steps
1. Log in to the Windows Server 2016 or 2019 system from Remote Desktop with an account that has administrator privileges.
2. On the taskbar, click Start, type administrative tools in the search text box, and press Enter.
3. From the list of administrative tools, double-click Computer Management.
4. In the left panel of the Computer Management window, go to Local Users and Groups > Users, where you can view the user account details.
5. Right-click an account, and select Set Password....
6. Click Proceed to complete the password reset process.
Storage
Changing the operating system password
If you lose the password on the Storage VM (SVM), start up in recovery mode to reset it.
Steps
1. Log in to the PowerFlex GUI presentation server. For a PowerFlex version prior to 3.5, open the PowerFlex GUI.
2. Select the SVM, and click Enter Maintenance Mode.
3. Select Protected, and click Enter Maintenance Mode.
4. Log in to VMware vCenter. Select the cluster and click Related Objects > Virtual Machines > Open Console.
5. Restart the SVM.
6. Press ESC during restart and click OK to enter text mode.
NOTE: You might have to hold down the ESC key.
7. Type E to edit the boot entry.
8. Select the second line (starts with kernel /boot…) and type E.
9. Append init=/bin/bash to the end of the line.
10. Select B to restart in recovery mode.
11. After the PowerFlex node restarts, type passwd, and enter a new password.
12. Type reboot, and allow the SVM to restart using the new password.
PowerFlex default accounts
PowerFlex has the following default accounts:
● PowerFlex Installer admin user: Lets the user issue installation commands in the PowerFlex Installer web client. The admin
creates the password at the start of installation.
● SVM root user: Provides full administrator privileges to all configurations.
● MDM admin user: A superuser account that provides full administrator privileges to all configuration and monitoring
activities through the CLI and GUI.
CAUTION: While deploying the system, PowerFlex Manager sets the same password for the PowerFlex Gateway
admin account, PowerFlex Gateway lockbox, MDMs, and LIAs. If you change the PowerFlex Gateway admin
password, you must also change these passwords in the PowerFlex Gateway and the nodes, to ensure PowerFlex
Manager can manage these components.
When you change the default account passwords, passwords must meet the following criteria:
● Between 6 and 31 characters
● Include at least three of the following groups:
○ [a-z]
○ [A-Z]
○ [0-9]
○ Special characters (!@#$...)
● No white spaces
NOTE: You might need to reconfigure the Secure Remote Services connectivity after changing the passwords.
Changing the Installation Manager admin password
Use the PowerFlex FOSGWTool to reset the gateway admin password.
About this task
The following shows the path to FOSGWTool, depending on where the PowerFlex Gateway is installed:
● Linux: /opt/emc/scaleio/gateway/bin/FOSGWTool.sh
● Microsoft Windows: C:\Program Files\EMC\ScaleIO\Gateway\bin\FOSGWTool.bat
The following shows the location of the gatewayUser.properties file:
● Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
● Windows: C:\Program Files\EMC\ScaleIO\Gateway\webapps\ROOT\WEB-INF\classes\
Steps
1. Use FOSGWTool to reset the password by typing the following command:
FOSGWTool --reset_password --password <new_gateway_password>
--config_file gatewayUser.properties
2. Restart the scaleio-gateway service.
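For example, on a Linux-based PowerFlex Gateway, the full invocation might look like the following (a sketch, assuming the default installation paths shown above):

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --reset_password --password <new_gateway_password> --config_file /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties
systemctl restart scaleio-gateway

On systems without systemd, restart the service with service scaleio-gateway restart instead.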
Changing the Light Installation Agent password
Light Installation Agent (LIA) establishes trust with the Installation Manager through a configurable token. The LIA token is
stored in /opt/emc/scaleio/lia/cfg/conf.txt on each SVM. The password is set at initial installation.
About this task
During installation, the IM password and the LIA token are stored in hashed format.
Prerequisites
To change the token after LIA runs, you must change the line in the configuration file and restart LIA.
Steps
1. Open the conf.txt LIA file (/opt/emc/scaleio/lia/cfg/).
2. Delete the lia_token hashed string.
3. Enter a new lia_token string in plaintext.
For security reasons, Dell EMC recommends the following complexity: a password length of 6 to 31 characters, including
at least three of the following groups: [a-z], [A-Z], [0-9], and special characters (!@#$...). Blank spaces are not allowed.
4. Save your changes and restart the LIA service.
a. Go to /opt/emc/scaleio/lia/bin/.
b. Run delete_service.sh.
c. Run create_service.sh.
This rehashes the new plaintext string.
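For example, the full restart sequence on an SVM might look like the following (a sketch, assuming the default LIA paths shown above):

cd /opt/emc/scaleio/lia/bin
./delete_service.sh
./create_service.sh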
Changing the SVM root password
Steps
1. Use SSH to connect to the PowerFlex SVM and log in as root.
2. Type passwd and press Enter.
3. Type the new password and press Enter.
4. Retype the password and press Enter.
Changing the default admin MDM password
Steps
1. Use SSH to log in as root user.
2. Type the following command: # scli --login --username admin.
PowerFlex rack password management
143
3. Type the following command: # scli --set_password.
4. Type the old password and press Enter.
5. Type the new password and press Enter.
6. Retype the new password and press Enter.
PowerFlex user-defined accounts
You can add extra user accounts to the PowerFlex MDM. You must assign a user role.
User role               Query   Configuration parameters          Configuration user credentials
Monitor                 Yes     No                                No
Configurator            Yes     Yes                               No
Backend Configurator    Yes     Yes (backend operations only)     No
Frontend Configurator   Yes     Yes (frontend operations only)    No
Administrator           Yes     Yes                               Can configure configurator and monitor users
Security Role           No      No                                Can define administrator users and control LDAP
Super User              Yes     Yes                               Yes

NOTE: Only one Super User is allowed per system and it must be a local user.
Adding a user
To add a user:
1. Use SSH to log in as root user.
2. Type the following command: # scli --login --username admin
3. Type the following command: scli --add_user --username <NAME> --user_role <Monitor|Configure|Administrator>
Modifying a user
To modify a user:
1. Use SSH to log in as root user.
2. Type the following command: # scli --login --username admin
3. Type the following command: scli --modify_user --username <NAME> --user_role <Monitor|Configure|Administrator>
Deleting a user
To delete a user:
1. Use SSH to log in as root user.
2. Type the following command: # scli --login --username admin
3. Type the following command: scli --delete_user --username <NAME>
Displaying users and roles
To display users and roles:
1. Use SSH to log in as root user.
2. Type the following command: # scli --login --username admin
3. Type the following command: scli --query_users
4. Type the following command: scli --query_user --user_id <ID> | --username <NAME>
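For example, a minimal session that adds a monitor-role user and confirms the result (the username audit1 is illustrative):

scli --login --username admin
scli --add_user --username audit1 --user_role Monitor
scli --query_users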
Generating a hashed password
To generate a hashed password for an MDM user on the PowerFlex Gateway, type the following command:
im generate_mdm_password --mdm_password <PASSWORD> --config_file /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties
Virtualization
Changing a VMware ESXi host root password
For security reasons, it might be necessary to change the password for the root user on a VMware ESXi host after installation.
Use any of the following methods to change the root password for the VMware ESXi host:
● VMware vSphere Client
● VMware ESXi shell command
● VMware ESXi host System Customization menu
Changing the password using VMware vSphere Client
Prerequisites
Log in to the VMware ESXi host service console as root user.
Steps
1. Log in to VMware vSphere Client.
2. Click Home > Inventory.
3. In the left pane, select the VMware ESXi server name or IP address. Tabs for the server are displayed in the right pane.
4. Select the Local Users & Groups tab.
5. Double-click the root user.
6. Select Change password.
7. In the Edit User - root dialog box, enter and confirm a new password.
8. Click OK.
Changing the password using the VMware ESXi shell command
Prerequisites
Log in to the VMware ESXi host service console as root user.
You can also acquire root privileges by running the su command.
Steps
1. To change the root password, enter: passwd root.
2. When prompted, enter the current password.
3. Enter the new root password and press Enter.
4. Verify the password by entering it again.
Changing the password using the VMware ESXi host System Customization menu
Prerequisites
Log in to the VMware ESXi host service console as root user.
You can also acquire root privileges by executing the su command.
Steps
1. From the System Customization menu of the VMware ESXi host, use the keyboard arrows to select Configure Password
and press Enter.
2. In the Configure Password dialog box, fill in the required fields to change the password:
a. Enter the Old Password of the VMware ESXi host.
b. Enter the new root password in the New Password field. Re-type it in the Confirm Password field.
c. Press Enter.
Modifying the VMware vCenter Server Single Sign On default administrator
password
Use this procedure to change the default password for the VMware vCenter Single Sign On administrator account.
Prerequisites
Log in to the VMware vSphere Client and connect to vCenter.
Access the VMware vSphere Client using either of the following methods:
● Open the browser and enter the following URL: https://vcenterip:9443/vsphere-client
● From the Start menu, choose All Programs > VMware > VMware vSphere Webclient.
Steps
1. In the left pane, select Administration.
2. Under Administration, select SSO Users and Groups.
The admin user displays in the right pane.
3. On the Users tab, right-click the administrator user.
4. Set and confirm the password for the admin user account. Be sure to use a strong password as the system validates the
password before accepting it.
5. Click OK.
Changing a VMware vCenter Single Sign On password
Users in the local domain, vsphere.local by default, can change their VMware vCenter Single Sign On passwords from a
Web interface. Users in other domains change their passwords following the rules for that domain.
About this task
NOTE: In an environment with a single vCenter with an embedded PSC, only one default domain name (vsphere.local)
exists. Use this procedure in environments with separate vCenters for the customer (with external PSCs) and a controller
vCenter (with an embedded PSC).
The VMware vCenter Single Sign On password policy determines when your password expires. By default, VMware vCenter Single
Sign On user passwords expire after 90 days, but administrator passwords do not expire. VMware vCenter Single Sign On
management interfaces show a warning when your password is about to expire.
If a password is expired, the local domain administrator (administrator@vsphere.local by default) can reset the
password using the dir-cli password reset command. Only members of the administrator group for the VMware
vCenter Single Sign On domain can reset passwords.
Steps
1. From a web browser, connect to the Platform Services Controller by entering the following URL: https://psc_hostname_or_IP/psc.
In an embedded deployment, the Platform Services Controller hostname or IP address is the same as the VMware vCenter
Server hostname or IP address.
2. Enter the username and password for administrator@vsphere.local (or another member of the VMware vCenter
Single Sign On administrators group).
If you specified a different domain during installation, log in as administrator@mydomain.
3. In the upper navigation pane, click your username to access the menu.
You can also select Single Sign On > Users and Groups and right-click to select Edit User.
4. Select Change Password and enter your current password.
5. Enter a new password and confirm it. The password must conform to the password policy.
6. Click OK.
Changing VM operating system administrative passwords
Use this procedure to change the virtual machine operating system server administrator password in Windows 2008 R2 and
Windows 2012.
Changing the server administrator password in Windows 2008 R2
Use this procedure to change the server administrator password in a Windows 2008 R2 environment.
Steps
1. Log in to the server using the Administrator account.
2. From the Start menu, select Control Panel > User Accounts > User Accounts.
3. Under Make changes to your user account, select Change your password.
4. Type your password in Current password.
5. In New password, type a new password.
6. Retype the password in Confirm new password.
7. In Type a password hint, provide a word or phrase to remind you of your password. This is optional.
8. Click Change password.
Changing the server administrator password in Windows 2012
Use this procedure to change the server administrator password in a Windows 2012 environment.
Steps
1. Log in to the server using Remote Desktop.
2. Press the Windows key and type Administrative tools.
3. Double-click Computer Management.
4. Expand Local Users and Groups. Select Users.
5. Right-click Administrator and choose Set Password.
6. Click Proceed.
7. Enter and confirm the new password.
8. Click OK.
Changing the administrator password for VMware vCenter Server Appliance
Use this procedure to change the VMware vCenter Server Appliance administrator password.
Steps
1. Log in to the VMware vCenter Server Appliance Web console.
2. On the Admin tab, enter your current password in the Current administrator password box.
3. Enter the new password in the New administrator password and Retype new administrator password boxes.
4. Click Change password.
Resetting the VMware vCenter Server Appliance root password
Use this procedure to reset a forgotten vCenter Server Appliance root password.
Steps
1. Take a snapshot or backup of the vCenter Server Appliance. Do not skip this step.
2. Reboot the vCenter Server Appliance.
3. After the operating system starts, press the E key to access the GNU GRUB Edit Menu.
4. Locate the line that starts with the word Linux. Append the following entries to the end of the line:
rw init=/bin/bash
5. Press F10.
6. At the command prompt, type the following command: passwd
7. Enter a new root password and re-enter the password to confirm it.
8. Unmount the file system by running the following command: umount /
9. Reboot the vCenter Server Appliance by running the following command: reboot -f
10. Confirm that you can access the vCenter Server Appliance using the new root password.
11. Remove the snapshot taken in step 1 if applicable.
Changing CloudLink passwords
CloudLink software is installed on the CloudLink Center VMs.
Unlock secadmin user password
Use this procedure to unlock the secadmin password.
Steps
1. Log in to the controller vCenter web client, and launch the CloudLink Center VM console.
NOTE: For a single vCenter environment, log in to single vCenter with a controller and customer datacenter. Go to the
controller datacenter to launch the CloudLink center VM console.
2. Log in using the CloudLink user credentials.
3. In the Update Menu dialog box, select Unlock User, and click OK.
A user unlocked message is displayed.
4. Click OK.
Changing the PowerFlex management controller password
Use the iDRAC Web console to change the root password.
Steps
1. Open a web browser and type: https://<ip_address_of_iDRAC>.
2. Log in as root.
3. Expand iDRAC Settings.
4. Select Users.
5. Select User ID 2 and select Edit.
ID 2 is the root user.
6. From the Edit User window:
a. In the Password field, type the new password.
b. In the Confirm Password field, re-type the new password.
c. Click Save.
Managing embedded operating system users and passwords
Adding users
Steps
1. If you are signed in as the root user, you can create a user at any time by typing: adduser username.
2. If you are a sudo user, add a new user by typing: sudo adduser username.
3. Give the user a password so that they can log in by typing: passwd username.
NOTE: If you are signed in as a nonroot user with sudo privileges, add sudo ahead of the command.
4. Type in the password twice to confirm it.
The user is set up and ready for use.
Granting sudo privileges to a user
If the new user should have the ability to run commands with root (administrative) privileges, you must give the new user
access to sudo.
Steps
To grant sudo privileges, add the user to the wheel group (which gives sudo access to all its members by default) using
gpasswd.
● If you are logged in as the root user, type: gpasswd -a username wheel
● If you are logged in as a nonroot user with sudo privileges, type: sudo gpasswd -a username wheel
The new user can now run commands with administrative privileges by typing sudo ahead of the command to run as
an administrator:
sudo some_command
You are prompted to enter the password of the regular user account that you are signed in as. Once the correct password has
been submitted, the command you entered runs with root privileges.
Managing users with sudo privileges
About this task
While you can add and remove users from a group (such as wheel) with gpasswd, the command cannot show
which users are members of a group. To see which users are part of the wheel group (and therefore have sudo privileges by
default), use the lid command. lid is normally used to show which groups a user belongs to, but with the -g flag you can
reverse it and show which users belong to a group by running sudo lid -g wheel. The output shows the
usernames and unique identifiers (UIDs) that are associated with the group. This is a good way of confirming that your previous
commands were successful and that the user has the privileges that they need.
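For example (the username and UID in the output are illustrative):

sudo lid -g wheel
 delladmin(uid=1000)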
Deleting users
The choice of deletion method depends on whether you are deleting the user and user files or the user account only.
Steps
1. SSH to the server and log in as root.
2. In the command prompt, choose either of the following:
● To delete the user without deleting any of their files, type: userdel username
● To delete the user's home directory along with the user account itself, type: userdel -r username
NOTE: Add sudo ahead of the command if you are signed in as a nonroot user with sudo privileges.
With either command, the user is automatically removed from any groups that they were added to. This includes the
wheel group if they were given sudo privileges. If you later add another user with the same name, they have to be added
to the wheel group again to gain sudo access.
Managing SUSE users and passwords
Creating users
About this task
useradd allows you to add users and specify certain criteria, such as comments, the user's home directory, shell type, and
many other account properties for the SUSE Linux operating system.
Steps
1. SSH to the server, and type: Server1:~# useradd -m -c "<test username>" -s /bin/bash <test>.
Where <test> is the username of the new user.
The following explains what each qualifier is used for:
● -m: Makes the useradd command create the user's home directory.
● -c "test username": Specifies a comment about the user.
● -s /bin/bash: Specifies which shell the user should use.
● test: The final qualifier is the username of the user.
2. Set the associated password, type: server1:~ # passwd <test>.
server1:~ # passwd <test>
Changing password for <test>.
New Password:
Reenter New Password:
Password changed.
Once the password is set, the user can successfully log in to the server.
Deleting users
The command to delete users is userdel, specified with the -r qualifier, which removes the home directory and mail
spool.
Steps
SSH to the server and type: server1:~ # userdel -r <test>
Once you have issued the userdel command, you will notice that the /home/<test> directory is removed. If you only want
to delete the user but leave their home directory intact, you can issue the same command but without the -r qualifier.
Enabling sudo on a user
Steps
1. SSH to the server and log in as root.
2. Type the following: sudo usermod -a -G wheel USERNAME
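To confirm the change, you can list the group memberships for the user; for example, using the <test> user created earlier:

groups <test>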
Managing Red Hat Enterprise Linux users and passwords
Add a new Red Hat Enterprise Linux user
Use this procedure to add a new Red Hat Enterprise Linux user.
Steps
1. To create a new user, use SSH to connect to the jump server, log in as root, and type useradd <options> username.
Where <options> are command-line options as outlined in the following table:
● -c <comment>: <comment> can be replaced with any string. This option is generally used to specify the full name of a user.
● -d home_directory: Home directory to be used instead of the default /home/username/.
● -e date: Date for the account to be disabled in the format YYYY-MM-DD.
● -f days: Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled
immediately after the password expires. If -1 is specified, the account is not disabled after the password expires.
● -g group_name: Group name or group number for the user's default (primary) group. The group must exist prior to being
specified here.
● -G group_list: List of additional (supplementary, other than default) group names or group numbers, separated by commas,
of which the user is a member. The groups must exist prior to being specified here.
● -m: Create the home directory if it does not exist.
● -M: Do not create the home directory.
● -N: Do not create a user private group for the user.
● -p password: The password encrypted with crypt.
● -r: Create a system account with a UID less than 1000 and without a home directory.
● -s: Login shell of the user, which defaults to /bin/bash.
● -u uid: User ID for the user, which must be unique and greater than 999.
Optionally, you can also set a password aging policy. See Red Hat Enterprise Linux 7 Security Guide.
2. By default, useradd creates a locked user account. To unlock the account, run the following command as root to assign a
password: passwd username.
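For example, to create a user with a full-name comment and a home directory, and then assign a password (the username pfxops is illustrative):

useradd -m -c "PowerFlex operator" pfxops
passwd pfxops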
Enabling sudo on a user
Steps
To enable sudo for your user on Red Hat Enterprise Linux, add your user ID (uid) to the wheel group:
a. SSH to the jump server and log in as root, or acquire root privileges by running su.
b. Type usermod -aG wheel <your_user_id>.
c. Log out and log in again.
Changing the IPI appliance password
Use this procedure to change the password for the Intelligent Physical Infrastructure (IPI) appliance.
Steps
1. Open a web browser and access the following URL: https://<IPI_Appliance_IP_address>
2. Click the Setup tab on the top menu bar.
3. Click Users in the left navigation pane.
4. Set the username, password, and/or access level. You can set a unique username for individuals requiring Web management
access to the appliance unit. There are three user account levels:
● Administrator: Full control of IPI appliance configuration settings.
● Controller: Can view configuration settings.
● Viewer: Can view configuration settings.
User 1/admin is the primary administrator. Do not remove administrator rights from the admin user as it might result in no one
having administrator access. If this occurs, a reset to factory defaults is the only solution.
5. Click Save to confirm changes.
Changing a user account password on the IPI appliance
Use this procedure to change the password for a user account on the Intelligent Physical Infrastructure (IPI) appliance.
Steps
1. Open a web browser and access the following URL: https://<IPI_Appliance_IP_address>
2. From the Setup menu, click Users.
3. Locate the account for which you are changing the password.
4. Enter the new password in the field.
Do not make any changes to user name or security role.
5. Click Save.
Chapter 10: Installing and configuring PowerFlex Manager
If PowerFlex Manager is already installed on the PowerFlex rack and the operating system installation (PXE) network is
configured in the factory, go to Configuring PowerFlex Manager and complete the tasks in that section.
Before you install PowerFlex Manager, ensure that PowerFlex Manager supports the RCM version running on the PowerFlex
rack.
As of PowerFlex Manager version 3.3, OpenManage Enterprise is no longer required to connect to Secure Remote Services. If
you use OpenManage Enterprise for other functionality, note that it is no longer installed; Dell EMC recommends using PowerFlex
Manager instead. In future upgrades, removing this software module will be recommended.
Installation prerequisites
Before you begin installing PowerFlex Manager, ensure that you know the information that is required to deploy the PowerFlex
Manager virtual appliance. Ensure that the static IP addresses are not being used anywhere else in the environment, and that
the correct networks are defined.
Ensure the following:
● The system meets the specified resource requirements:
PowerFlex Manager version   vCPU   VRAM    Disk space
3.7 and later               8      32 GB   300 GB
3.2 through 3.6.1           8      32 GB   200 GB
3.1                         4      16 GB   166 GB
● VMware vCenter version (based on RCM) is configured and accessible through the hardware management and hypervisor
management networks.
● The appropriate VMware licenses are deployed.
● The VMware vCenter server and VMware vSphere Client are running.
● The TCP/IP address information is available for the PowerFlex Manager virtual appliance.
● The static IP addresses are not used anywhere else in the environment.
● The management, operating system installation, and OOB management networks are defined in the PowerFlex management
controller.
NOTE: You can configure PowerFlex Manager as the DHCP server or PXE server in the Initial Setup wizard when you
first log in to the PowerFlex Manager user interface. If you specified that you want to use an external server in the
Logical Configuration Survey, it is already configured.
● The PowerFlex Manager PXE virtual local area network (VLAN) flex-install-<vlanid> is configured. The default is VLAN 104.
● Install the PowerFlex Gateway VM using PowerFlex Manager.
If you must configure the PXE VLAN, see the Configuring the operating system installation (PXE) network section.
Configuring the operating system installation (PXE)
network
You need to complete the procedures in this section only if the operating system installation (PXE) network was not installed as
part of the logical build process.
You must configure the operating system installation network if you are using managed mode or lifecycle mode, or to add an
existing service.
Configure the switches
Identify the switches, add the configurations, and verify that all the port channel interfaces are in a forwarding state before
configuring the port groups.
About this task
If possible, use flex-install-104 as the name of the VLAN and port group and use vlan 104 as the VLAN number in the
switches.
Steps
1. Create the PXE VLAN on all the access switches that exist in the environment:
Cisco Nexus switches:

ToR Switch-A# conf t
ToR Switch-A(config)# vlan <vlan-id>
ToR Switch-A(config-vlan)# name <vlan-name>
ToR Switch-A(config-vlan)# end

Dell EMC PowerSwitch switches:

ToR Switch-A# config t
ToR Switch-A(config)# vlan <vlan-id>
ToR Switch-A(config)# interface ethernet<slot>/<port>
ToR Switch-A(config)# no shutdown
ToR Switch-A(config)# switchport mode trunk
ToR Switch-A(config)# switchport trunk allowed vlan <vlan-id>
If the aggregation switches are installed and connected, create the PXE VLAN on the aggregation switches:
Cisco Nexus switches:

Agg Switch-A# conf t
Agg Switch-A(config)# vlan <vlan-id>
Agg Switch-A(config-vlan)# name <vlan-name>
Agg Switch-A(config-vlan)# end

Dell EMC PowerSwitch switches:

Agg Switch-A# config t
Agg Switch-A(config)# interface ethernet<slot>/<port>
Agg Switch-A(config)# no shutdown
Agg Switch-A(config)# switchport access vlan <vlan-id>
Agg Switch-A(config)# switchport mode trunk
Agg Switch-A(config)# switchport trunk allowed vlan <vlan-id>
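To confirm that the VLAN was created, you can verify it on each switch; for example, on a Cisco Nexus switch:

ToR Switch-A# show vlan id <vlan-id>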
2. Add the PXE VLAN for each management node on all the access switches:
vPC port channel on Cisco Nexus switches:

ToR Switch-A# conf t
ToR Switch-A(config)# interface port-channel <vPC peer-link port-channel #>
ToR Switch-A(config-if)# switchport trunk allowed vlan add <vlan-id>
ToR Switch-A(config-if)# end

VLT port channel on Dell EMC PowerSwitch switches:

ToR Switch-A# config t
ToR Switch-A(config)# interface port-channel <channel-number>
ToR Switch-A(config)# switchport mode trunk
ToR Switch-A(config)# switchport access vlan <vlan-id>
ToR Switch-A(config)# switchport trunk allowed vlan <vlan-id>
ToR Switch-A(config)# vlt-port-channel <channel-number>
ToR Switch-A(config)# spanning-tree port type edge
3. If the aggregation switches connect to other access switches in other PowerFlex rack nodes, add the PXE VLAN to the
uplinks between the access and aggregation switches:
ToR Switch-A#conf t
ToR Switch-A(config)# interface port-channel <port-channel # between ToR switch and Agg switch>
ToR Switch-A(config-if)# switchport trunk allowed vlan add <vlan-id>
ToR Switch-A(config-if)# end
4. Verify that all port channel interfaces are in a forwarding state for the VLAN being added.
NOTE: The following example shows only the server and peer-link port channels configured:

ToR Switch-A# show spanning-tree vlan <vlan-id>
<lines removed>
Interface   Role Sts Cost   Prio.Nbr   Type
----------- ---- --- ------ ---------- ---------------------------
Po50        Desg FWD 1      128.4145   (vPC peer-link) Network P2p
Po111       Desg FWD 1      128.4206   (vPC) Edge P2p
5. Save the switch configurations:
ToR Switch-A#copy run start
Configure the operating system installation (PXE) port group on
the controller cluster
Add a distributed port group on the management cluster to establish connectivity from the VM to the customer network.
Steps
1. Log in to VMware vCenter for the management cluster using the VMware vSphere Client.
2. Select Networking and select the DVswitch0 VDS for the management cluster.
3. Right-click the DVswitch0 VDS and select Distributed Port Group > New Distributed Port Group.
4. On the New Distributed Port Group page, enter a name for the port group and click Next.
5. On the Configure Settings page, keep the default settings except for the following fields:
● VLAN TYPE: VLAN
● VLAN ID: PXE VLAN #
6. On the Ready to Complete page, verify that the information is correct and click Finish.
7. To configure hashing on the newly created distributed port group, right-click the port group and select Edit Settings.
8. In the left pane, select Teaming and failover and from the right pane select Route based on IP hash from the Load
balancing menu. Click OK.
Configure the operating system installation (PXE) port group on
the customer cluster
Add a distributed port group on the PowerFlex rack customer cluster to establish connectivity from the VM to the management
network.
Steps
1. Log in to VMware vCenter for the customer cluster using the VMware vSphere Client.
2. Go to Networking and select the cust_dvswitch VDS for the customer cluster.
3. Right-click the cust_dvswitch VDS and select Distributed Port Group > New Distributed Port Group.
4. On the New Distributed Port Group page, enter a name for the port group and click Next.
5. On the Configure Settings page, keep the default settings except for the following fields:
● VLAN TYPE: VLAN
● VLAN ID: PXE VLAN #
6. On the Ready to Complete page, verify that the information is correct and click Finish.
7. To configure hashing on the newly created distributed port group, right-click the port group and select Edit Settings.
8. In the left pane, select Teaming and failover and from the right pane select Route based on IP hash from the Load
balancing menu. Click OK.
Deploy PowerFlex Manager
Perform this procedure to download the PowerFlex Manager Open Virtual Appliance (OVA) and deploy the PowerFlex Manager
virtual appliance.
Prerequisites
Log in to the Dell EMC Download Center, download the PowerFlex Manager OVA file, and save it to a location that is accessible
to the VMware vSphere Client.
Steps
1. Log in to VMware vSphere Client.
2. Right-click Management ESXi host and select Deploy OVF Template.
The Deploy OVF Template wizard displays.
3. On the Select an OVF template page, enter the URL where the OVA is located or select Local file and browse to the
location where the OVA is saved. Click Next.
4. On the Select name and folder page, enter a name for the VM (up to 80 characters), and select a data center where the
template will be stored. Click Next.
5. On the Select a compute resource page, select a host where the deployed template runs. Click Next.
6. On the Review details page, verify that the template details are correct. Click Next.
7. On the License agreements page, read the license agreement, and select I accept all License agreements and click
Next.
8. On the Select storage page, complete the following:
a. Select Thin provision from the Select virtual disk format menu.
NOTE: For PowerFlex management controllers and VM details, see PowerFlex management controller datastore and
virtual machine details for more information.
b. Select a datastore from the Datastore Clusters menu and click Next.
9. On the Select networks page, complete the following:
a. Select a destination network for the ESXi management source network.
b. Select a destination network for the OS Installation source network. The operating system installation network is the
PXE network.
Skip this step for partial network deployments.
c. Select a destination network for the OOB Management source network. The OOB management network is the
dedicated iDRAC network.
d. Click Next.
10. On the Ready to Complete page, review the configuration data and click Finish to deploy the PowerFlex Manager virtual
appliance.
Setting up PowerFlex Manager
Perform the tasks in this section to complete the initial PowerFlex Manager configuration:
● Change the default password
● Configure the networks
● Configure the date and time
● Change the default hostname
Changing the Dell Admin password
After you accept the license agreement, you can log in to the Dell EMC Initial Appliance Configuration UI and change the
default password of the PowerFlex Manager virtual appliance.
Steps
1. Log in to the PowerFlex Manager virtual appliance through the VM console using the following default credentials and then
press Enter:
● Username: delladmin
● Password: delladmin
2. Click Agree to accept the License Agreement and click Submit. After you agree to the License Agreement, you can go
back and review the license agreement at any time by clicking EULA in the Dell EMC Initial Appliance Configuration UI.
3. In the Dell EMC Initial Appliance Configuration UI, click Change Admin Password.
4. Enter the Current Password, New Password, Confirm New Password, and click Change Password.
Configuring the networks
Configure the VMware ESXi management, out-of-band (OOB) management, and operating system installation networks through
the Dell EMC Initial Appliance Configuration UI.
About this task
The networks should be mapped as follows:
Network adapter   Network                                        Network connection
1                 VMware ESXi management                         ens160
2                 OOB management (dedicated iDRAC network)       ens192
3                 Operating system installation (PXE network)    ens224
Prerequisites
If the networks are not mapped, you can identify the mapping by completing the following steps:
1. Log in to VMware vSphere Client.
2. Right-click the PowerFlex Manager virtual appliance and select Edit Settings.
3. On the Virtual Hardware tab, click the Network adapter menu for the VMware ESXi management, the OOB management,
and the operating system installation networks, and take note of the Port ID and MAC Address values for each network.
4. Power on PowerFlex Manager.
5. Log in to the PowerFlex Manager virtual appliance through the VM console using the following credentials:
● Username: delladmin
● Password: delladmin
6. Click Agree to accept the License Agreement and click Submit.
7. Log out of the Dell EMC Initial Appliance Configuration UI.
8. Enter ifconfig. The network connections display.
9. To identify the network connection mapped to a network, check the MAC address of each of the three network connections
that are displayed against the MAC address of each of the three networks that you noted above in step 3. See the table
above for mapping details.
Steps
1. Enter pfxm_init_shell to restart the Dell EMC Initial Appliance Configuration UI and then click Network
Configuration. If the UI does not display, enter sudo pfxm_init_shell and then enter the username delladmin
and your password.
2. To configure the VMware ESXi management network, complete the following steps:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address, Netmask, and Gateway.
e. Enter the IP addresses in the DNS servers box.
f. Enter the search domain.
g. Click Save.
3. To configure the OOB management network, which is the dedicated iDRAC network, complete the following steps:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address and Netmask.
e. Click Save.
4. To configure the operating system installation network, which is the PXE network, complete the following steps:
a. On the Network Connections page, select ens<network_connection> and then click Edit the selected connection
icon.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address and Netmask.
e. Click Routes and ensure the Use this connection only for resources on its network check box is selected and select
OK.
f. Click Save.
5. Log out of the PowerFlex Manager virtual appliance.
Next steps
Log in to the PowerFlex Manager UI through your browser using the URL that is displayed in the Dell EMC Initial Appliance
Configuration UI. For example, https://<IP_Address>/ui. Use the admin username and admin password.
If you can successfully log in to the PowerFlex Manager UI, PowerFlex Manager is successfully deployed.
If you cannot log in to PowerFlex Manager, ensure you are using the correct <IP_Address> by entering ip address in the
command line and searching for the IP address of the PowerFlex Manager virtual appliance. The <IP_Address> should be the
same <IP_Address> that is displayed in the Dell EMC Initial Appliance Configuration UI.
Configure the date and time
Perform this procedure to set the date and time of the PowerFlex Manager virtual appliance in the Dell EMC Initial Appliance
Configuration UI. Changing the time after the license is uploaded might result in features being disabled because the virtual
appliance might fall outside the time range for which the license is valid.
About this task
NOTE: PowerFlex Manager currently does not support the chronyd implementation of NTP. NTP is required for configuring
the date and time.
Steps
1. In the Dell EMC Initial Appliance Configuration UI, click Date / Time properties.
2. Under the Date and Time tab, automatically set the time by selecting the Synchronize date and time over the network
check box. Complete any, or all, of the following steps depending on your requirements:
● Add a configured NTP server to the NTP Servers menu.
● Edit the default NTP server(s) from the NTP Servers menu.
● Delete NTP server(s) from the NTP Servers menu.
3. On the Time Zone tab, select the nearest city to your time zone on the map and select the System clock uses UTC check
box. Click OK.
Change the hostname
Update the hostname from the default setting, using the Dell EMC Initial Appliance Configuration UI.
Steps
1. In the Dell EMC Initial Appliance Configuration UI, click Change Hostname.
The Update Hostname dialog is displayed.
2. Enter the new hostname in the box.
3. Click Update Hostname.
A dialog appears that states the hostname is successfully updated.
4. Reboot the PowerFlex Manager virtual appliance for the changes to take effect.
Enable remote access to the PowerFlex Manager VM
Perform this procedure to enable SSH and access the PowerFlex Manager VM through the command line to complete certain
tasks, for example, administration or system maintenance tasks.
Steps
1. Log in to the PowerFlex Manager appliance console using the delladmin username.
If you do not see the command line prompt, log out of the shell and log back in.
2. Enter the following command: sudo su
You are prompted to enter the delladmin password.
3. Enter the following commands:
systemctl enable sshd
systemctl start sshd
4. Type exit to log out and return to the delladmin user.
5. Connect to the PowerFlex Manager management IP address with the SSH client to verify that SSH is enabled.
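For example, from a management workstation (the IP address is illustrative):

ssh delladmin@192.168.105.105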
Configuring PowerFlex Manager
Perform these steps to configure PowerFlex Manager. If PowerFlex Manager was installed on the device in the factory, you do
not need to perform these steps.
About this task
For detailed information about how to configure PowerFlex Manager, see the Initial setup and configuration section in the
PowerFlex Manager online help.
Prerequisites
Log in to the Dell EMC Download Center and download the PowerFlex Manager license key file to a local network share.
Have the following information available:
● Time zone of the virtual appliance that hosts PowerFlex Manager
● IP address or hostname of at least two network time protocol (NTP) servers
● IP address or hostname, port, and the credentials of the proxy server (optional)
● Networks for PowerFlex Manager to access in the environment (optional)
● Unique software ID
● Connector settings for the Secure Remote Services gateway
Connector settings for the Secure Remote Services gateway
Steps
1. Log in to PowerFlex Manager using the following credentials: username=admin and password=admin.
The Setup wizard prompts you to configure the basic settings that are required to start using PowerFlex Manager.
2. Configure the basic settings by completing the Setup wizard.
a. In the Setup wizard, click Next on the Welcome page.
b. On the Licensing page, upload the license key file by clicking Choose File. Browse to the local share where you
downloaded the license file. Click Save and Continue to upload the license and continue with the wizard.
NOTE: After uploading the license, you must complete the remaining tasks in the Setup wizard. If you close
the Setup wizard without completing the remaining tasks, go to Settings > Virtual appliance management to
complete the tasks.
c. On the Time Zone and NTP Settings page, the current settings from the console are displayed. Click Save and Continue.
NOTE: If the NTP details are not displayed in the console, configure them from the Time Zone and NTP Settings
page. Configure the time zone and add the NTP server information. Click Save and Continue.
d. On the Proxy Settings page, if using a proxy server, select the check box and enter configuration details. Click Save
and Continue.
e. On the DHCP Settings page, to configure PowerFlex Manager as a DHCP or PXE server, select the Enable
DHCP/PXE server check box. Enter the DHCP details and click Save and Continue.
NOTE: Enter the starting IP address in the range, for example 192.168.104.100, and the ending IP address in
the range, for example, 192.168.104.250. The ending IP range can be increased depending on the number of
servers that are being configured.
f. Skip this step. Do not configure alert connector during the Setup wizard. Deploy the system and create the cluster first.
See Configure the alert connector.
g. On the Summary page, verify the settings. Click Finish to complete the initial setup.
3. The Getting Started page displays when you complete the initial setup. The page guides you through the common
configuration that is required to prepare a new PowerFlex Manager environment. A green check mark on a step indicates
that you have completed the step. First, click Configure Compliance under Step 1: Firmware and Software Compliance
to provide the RCM location and authentication information for use within PowerFlex Manager.
For PowerFlex Manager version 3.8, RCMs are digitally signed. For any previous version, the RCMs appear in a
Needs Approval state, and you must approve the RCM. Choose the Download from local network path option to download
the compliance file from an NFS or CIFS file share and provide the authentication information of the jump server from which
the compliance file is shared. FTP paths are also supported.
For example, FTP path: ftp://<IP Address>/<File Path>
For example, CIFS share path: \\<IP Address>\D$\<File Path>
For a CIFS share path, you must enter the NETBIOS domain before the username (for example, CORP\username) when you
provide the credentials.
A message displays when the RCM upgrade starts. You can close this window. The upgrade runs in the background.
4. Click Define Networks under Step 2: Networks to enter detailed information about the available networks in the
environment. The networks that are defined in PowerFlex Manager are used in templates to specify the networks or VLANs
that are configured on nodes and switches for services.
NOTE: Before proceeding with the deployment of a node using PowerFlex Manager, ensure the network and VLANs are
configured on the Networks page of the PowerFlex Manager.
The following outlines the purpose of each network:

● General-purpose LAN: To allow for a takeover of the PowerFlex management controller 2.0, the following VLANs must be
configured as general-purpose LANs in PowerFlex Manager: 101, 103, 151, 152, 153, 154.
NOTE: These networks must be configured as their default network types.
● Hypervisor management: Use to identify the management network for a hypervisor or operating system deployed on a node.
● Hypervisor migration: Use to manage the network that you want to use for live migration. Live migration allows you to move
running virtual machines from one node of the failover cluster to a different node in the same cluster.
● Operating system installation: Allows a static or DHCP network for operating system imaging on nodes.
● Hardware management: Use for out-of-band management of hardware infrastructure.
● PowerFlex data: Use for traffic between PowerFlex data clients (SDC) and data servers (SDS). Used for all node types.
● PowerFlex data (SDC traffic only): Use for storage data client traffic only.
● PowerFlex data (SDS traffic only): Use for storage data server traffic only.
● PowerFlex management controller 2.0 PowerFlex management: Use for PowerFlex management controller 2.0 PowerFlex
management traffic.
● PowerFlex management controller 2.0 hypervisor migration: Use to manage the network that you want to use for live
migration. Live migration allows you to move running VMs from one node of the failover cluster to a different node in the
same cluster.
● PowerFlex management controller 2.0 PowerFlex data 1: Use for traffic between SDC and SDS.
● PowerFlex management controller 2.0 PowerFlex data 2: Use for traffic between SDC and SDS.
● PowerFlex replication: Use to support PowerFlex replication.
● PowerFlex management: Use for PowerFlex system management.
Installing and configuring PowerFlex Manager
Next steps
Perform the Discover resources procedure.
Discover resources
Perform this step to discover and grant PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
About this task
Dell Technologies recommends using separate operating system credentials for SVM and VMware ESXi. For information about
creating or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. If the nodes are not
configured for alert connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following describes how to configure each resource type, its resource state, and an example:

● Element manager (Managed): Panduit IPI cabinet IP address; CloudLink Center IP address.
● PowerFlex controller nodes (Unmanaged): PowerEdge iDRAC management IP address.
NOTE: PowerFlex controller nodes are managed manually outside of PowerFlex Manager.
● PowerFlex nodes (Managed): PowerEdge iDRAC management IP address. If you want to perform firmware updates or
deployments on a discovered node, change the default state to Managed.
NOTE: Perform firmware or RCM updates from the Services page, not the Resources page.
● Management switch (Unmanaged): Switch management IP address.
NOTE: Management switches are managed manually outside of PowerFlex Manager.
● Access switch (Managed): Switch management IP address.
● Management controller vCenter or single vCenter (Managed): Controller vCenter IP address.
● vCenter (Managed): Customer vCenter IP address.
● PowerFlex Gateway (Managed): PowerFlex Gateway IP address.
Prerequisites
Gather the IP addresses and credentials that are associated with the resources.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources under Step 3: Discover.
2. On the Welcome page of the Discovery wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address. To discover a resource in a hostname, provide a starting and ending number for the
hostname range.
5. In the Resource State list, select Managed or Unmanaged. Resource state must be managed for PowerFlex Manager to
send alerts to Secure Remote Services.
6. To discover resources into a selected node pool instead of the global pool (default), select the node pool from the Discover
into Node Pool list.
7. Select the appropriate credential from the Credentials list.
8. If you want PowerFlex Manager to automatically reconfigure the iDRAC nodes it finds, select the Reconfigure discovered
nodes with new management IP and credentials checkbox. This option is not selected by default, because it is faster to
discover the nodes if you bypass the reconfiguration.
9. Select the Auto configure nodes to send alerts to PowerFlex Manager checkbox to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Discover an existing cluster
Perform this procedure to discover existing VMware clusters that are already deployed in the environment and manage them
within PowerFlex Manager.
About this task
For detailed information, see the PowerFlex Manager online help.
Prerequisites
You must have the VMware vCenter, PowerFlex Gateway, CloudLink Center, and hosts discovered in the resource list. The
PowerFlex Gateway must be in the service because PowerFlex Manager puts the gateway in maintenance mode.
Lifecycle mode allows you to add a service to PowerFlex Manager without discovering switches, providing lifecycle
management of node components even when a supported switch cannot be discovered or managed.
Steps
1. If applicable for the environment, click Add Existing Service under Step 4: Add Existing Service.
2. Click Next on the Add Existing Service wizard Welcome page.
3. Enter a service name in the Name field and provide a description.
4. Select the Type for the service.
5. Specify the version to use for compliance by selecting it from the Firmware and Software Compliance list or choose Use
PowerFlex Manager appliance default catalog.
6. Specify the service permissions and click Next.
7. Verify that the network automation type is set to Full Network Automation and click Next.
NOTE: Do not select Partial Network Automation unless instructed to do so by Dell Technologies Support. This
option is only for special use cases.
8. Provide information on the Cluster Information page:
a. Enter a name for the cluster component.
b. Select values for the cluster settings. For OS Image, you can select the image or choose to use the image provided with
the target compliance version.
c. Click Next.
9. On the OS Credentials page, select the OS credential to use for each node and SVM. You can select one credential
for all nodes (or SVMs) or choose credentials for each item separately. Create the operating system credentials on the
Credentials Management page under Settings. Click Next.
For non-root (sudo) credential users, add the non-root user at the operating system level. See Managing embedded
operating system users and passwords for more information.
10. Review the inventory on the Inventory Summary page and click Next.
11. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits.
12. To import a large number of general-purpose VLANs from VMware vCenter, perform these steps:
a. Click Import Networks on the Network Mapping page.
b. In the Import Networks wizard, click each network that you want to add under Available Networks. To add all
networks, click the check box to the left of the Name column.
c. Click the double arrow to move the networks to Selected Networks. Click Save.
13. Click Next.
14. Review the Summary page and click Finish.
NOTE: PowerFlex management controller 2.0 will be in lifecycle mode after the service takeover.
Configure the operating system installation (PXE) network in
PowerFlex Manager
Configure the operating system installation (PXE) network to enable PowerFlex Manager to automatically configure nodes that
are connected to the network.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Networks.
3. Click Define.
4. Enter a network name in the Name field.
5. From the Network Type menu, select OS Installation.
6. Enter the VLAN ID.
7. Click Save.
Clone a template
Perform this procedure to create a template with requirements to follow during deployment. For most environments, you can
clone one of the sample templates that are provided with PowerFlex Manager and modify as needed.
About this task
Choose the sample template that is most appropriate for the environment. For example, for a hyperconverged deployment,
clone the Hyperconverged Nodes template. For a two-layer deployment, clone the Compute Only - ESXi template and then
clone one of the storage templates. If deploying a PowerFlex storage-only node, PowerFlex compute-only node, and PowerFlex
hyperconverged node, you must create or clone three templates.
PowerFlex Manager can deploy a presentation server, CloudLink Center, and PowerFlex Gateway. Clone the appropriate sample
template to deploy these:
● For presentation server, clone the Management - Presentation Server template.
● For CloudLink Center, clone the Management - CloudLink Center template.
● For PowerFlex Gateway, clone the Management - PowerFlex Gateway template.
Keep the following considerations in mind:
● The template that you first deploy depends on whether you want primary MDMs on storage-only or hyperconverged nodes.
● If deploying storage-only and compute-only nodes, deploy the storage-only template first.
Steps
1. On the PowerFlex Manager menu bar, click Templates, and click Add a Template.
2. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
3. For Category, select Sample Templates and select the Template to be Cloned. Click Next.
4. On the Template Information page, provide the following information:
a. Enter a Template Name.
b. From the Template Category list, select a category. To create a category, select Create New Category from the list.
c. Enter a Template Description (optional).
d. Specify the version to use for compliance by selecting it from the Firmware and Software Compliance list or select
Use PowerFlex Manager appliance default catalog.
e. Specify the service permissions for the template under Who should have access to the service deployed from this
template? by performing one of the following actions:
● Restrict access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add standard users to the list.
● Grant access to PowerFlex Manager Administrators and All Standard Users.
5. Click Next.
6. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
7. Click Finish.
8. To use CloudLink encryption for a deployment, perform the following steps when cloning the PowerFlex hyperconverged nodes or PowerFlex storage-only nodes templates:
NOTE: Encryption is available for PowerFlex hyperconverged nodes, PowerFlex storage-only nodes, and CloudLink
deployment. Ensure the CloudLink Center VM is deployed and available on the Resources page. Deploy CloudLink
Center if it is not deployed.
a. In the template, under Node settings, select Enable Encryption.
● To enable CloudLink software encryption (SSD/NVMe) on non-self-encrypting drives, select Software Encryption.
● To enable hardware encryption on SED drives (SSD drives that are encryption capable), select Self Encrypting
Drive (SED).
b. To specify the type of encryption used when encryption is enabled, select the Drive Encryption Type. Select either
Software Encryption or Self Encrypting Drive (SED).
c. Under PowerFlex Cluster settings, select CloudLink Center.
9. In a dual network environment, perform the following steps:
a. Edit the nodes, select the number of instances to deploy per service, leave the remaining settings at their defaults, and click Continue.
b. Set all values for operating system, hardware, and BIOS.
c. Under Network Settings, create three interfaces using the following:
NOTE: All the sample templates are predefined with the required network interfaces and VLANs based on the type
of the template used.
Interface 1
● Fabric type: Ethernet (NIC/CNA)
● Port layout: Two port, 10 GB and two port, 1 GB
● Redundancy: Enabled
● Port 1/Port 2: flex-install-<vlanid>, flex-stor-mgmt-<vlanid>, flex-node-mgmt-<vlanid>, flex-vmotion-<vlanid>
Interface 2
● Fabric type: Ethernet (NIC/CNA)
● Port layout: Two port, 25 GB
● Redundancy: Disabled
● Port 1: Unconfigured
● Port 2: flex-data2-<vlanid>, flex-data4-<vlanid>
Interface 3
● Fabric type: Ethernet (NIC/CNA)
● Port layout: Two port, 25 GB
● Redundancy: Disabled
● Port 1: Unconfigured
● Port 2: flex-data1-<vlanid>, flex-data3-<vlanid>
d. Click Validate Settings and verify that the new nodes are listed. Click Close and Save.
e. Edit and set up the VMware settings. Select the target customer VMware vCenter, data center, and cluster name.
f. Click Save.
g. Edit and set up the PowerFlex cluster settings and select the appropriate PowerFlex Gateway.
10. To use external SDC to SDS communication, perform the following steps:
a. In the template under Network Settings, select the required VLANs to enable external SDC communication on SDS data
interfaces.
b. Click Enabled under Static Routes.
c. Click Add New Static Route.
d. Select the source and destination networks from the menu and enter the gateway IP address of the SDS data network VLAN. Repeat this for each data VLAN. You can verify the resulting routes after deployment, as shown in the example after these steps.
11. To use compression for a deployment, in the template under Node Settings, select Enable Compression.
12. To use replication for a deployment, perform the following steps:
a. In the template under Node Settings, select Enable Replication.
b. Select the following VLANs depending on the node type:
PowerFlex storage-only nodes:
● Interface 1: flex-install-<vlanid>, flex-node-mgmt-<vlanid>, flex-data1-<vlanid>, flex-data3-<vlanid>, flex-rep1-<vlanid>
● Interface 2: flex-data2-<vlanid>, flex-data4-<vlanid>, flex-rep2-<vlanid>
PowerFlex hyperconverged nodes:
● Interface 1: flex-install-<vlanid>, powerflex-vm-workload, flex-vmotion-<vlanid>, flex-oob-mgmt-<vlanid>, flex-node-mgmt-<vlanid>
● Interface 2: flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, flex-data4-<vlanid>, flex-rep1-<vlanid>, flex-rep2-<vlanid>
c. Enable the static route, select the source and destination, and provide the gateway details for the primary and remote
sites.
13. To use fault sets, perform the following steps:
a. In the template, select PowerFlex Cluster.
b. Under PowerFlex Settings, select Enable Fault Sets. Specify the number of fault sets. The default is three fault sets,
with a minimum of two nodes in each fault set.
14. Click Finish.
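If the deployment uses the external SDC static routes from step 10 or the replication static routes from step 12, you can spot-check the routing table on a deployed storage-only node after the service completes. The following is a minimal sketch, not part of the documented procedure; it assumes SSH access to the node, and the destination address is illustrative:

# Display the routing table and confirm that the static route to the remote data VLAN is present
ip route show
# Ask the kernel which route and gateway it would use to reach a peer SDS data IP address (address is illustrative)
ip route get 192.168.152.10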
Configuring network settings in PowerFlex Manager templates
This section describes how to configure network settings for each of the logical network configuration designs.
The standard PowerFlex rack logical network configuration is the LACP bonding NIC port design. Depending on when it was
built, the system might use a different design. The following table summarizes the high-level differences between the network
configuration designs. Use this information to ensure that you use the appropriate network logical configuration in PowerFlex
Manager templates.
To use replication, the system must use or be upgraded to the LACP bonding NIC port design.
● LACP bonding NIC port. Nodes: PowerFlex R650/R750/R6525/R640/R740xd/R840; network speed (GB): 25/100; storage data networks: 4; network traffic load balancing: LACP; services: Replication.
● Static bonding NIC port. Nodes: PowerFlex R650/R750/R6525/R640/R740xd/R840; network speed (GB): 25; storage data networks: 2; network traffic load balancing: IP Hash; services: NA.
● Non-bonded NIC port. Nodes: PowerFlex R630/R730xd; network speed (GB): 10; storage data networks: 2; network traffic load balancing: IP Hash; services: NA.
Configure LACP bonding NIC port design network settings in PowerFlex
Manager templates
Perform this procedure to configure network settings in PowerFlex Manager for LACP bonding NIC port design. This design
supports up to four storage networks.
About this task
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
NOTE: The number of data networks in the MDM cluster must match the number of data networks in the template.
If the customer chooses two logical data networks in the Logical Configuration Survey, select only two data networks (flex-data1 and flex-data2) during the template creation process.
Steps
1. Clone the template:
a. On the PowerFlex Manager menu bar, click Templates and click Add a Template.
b. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
c. From the Category list, select Sample Templates.
d. From the Template to be Cloned list, select the template that matches the service you want to deploy. Click Next.
e. On the Template Information page, enter the template name. Select Create New Category from the Template Category list.
f. Enter Prod in the New Category Name box, specify the Firmware and Software Compliance version, and specify the service permissions for the template under Who should have access to the service deployed from this template. Click Next.
g. On the Additional Settings page, provide new values for network settings, operating system settings, cluster settings,
PowerFlex gateway settings, and node pool settings.
h. Click Finish. The cloned template is added to the list on the Templates page.
2. Click the cloned template.
3. Click the node icon and click Edit.
4. Verify the number of nodes to deploy and click Continue.
5. For PowerFlex compute-only nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Port Channel (LACP enabled).
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-install-<vlanid>, flex-node-mgmt-<vlanid>, flex-vmotion-<vlanid>
● Interface 1 > Port 2 > Network (VLAN): flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, flex-data4-<vlanid>
● Interface 2 > Port 1 > Network (VLAN): flex-install-<vlanid>, flex-node-mgmt-<vlanid>, flex-vmotion-<vlanid>
● Interface 2 > Port 2 > Network (VLAN): flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, flex-data4-<vlanid>
NOTE: For Linux compute-only nodes, assign a general-purpose VLAN network on Port 1; do not use flex-node-mgmt-<vlanid> or flex-install-<vlanid>.
6. For PowerFlex hyperconverged nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Port Channel (LACP enabled).
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-install-<vlanid>, flex-node-mgmt-<vlanid>, flex-stor-mgmt-<vlanid>, flex-vmotion-<vlanid>
● Interface 1 > Port 2 > Network (VLAN): flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, flex-data4-<vlanid>
● Interface 2 > Port 1 > Network (VLAN): flex-install-<vlanid>, flex-node-mgmt-<vlanid>, flex-stor-mgmt-<vlanid>, flex-vmotion-<vlanid>
● Interface 2 > Port 2 > Network (VLAN): storage network1, storage network2, storage network3, storage network4
7. For PowerFlex storage-only nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Port Channel (LACP enabled).
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-install-<vlanid>, flex-stor-mgmt-<vlanid>, flex-data1-<vlanid>, flex-data3-<vlanid>, flex-rep1-<vlanid> (optional)
● Interface 1 > Port 2 > Network (VLAN): flex-data2-<vlanid>, flex-data4-<vlanid>, and flex-rep2-<vlanid> (optional)
● Interface 2 > Port 1 > Network (VLAN): flex-install-<vlanid>, flex-stor-mgmt-<vlanid>, flex-data1-<vlanid>, flex-data3-<vlanid>, flex-rep1-<vlanid> (optional)
● Interface 2 > Port 2 > Network (VLAN): flex-data2-<vlanid>, flex-data4-<vlanid>, and flex-rep2-<vlanid> (optional)
Configure static bonding NIC port design network settings in PowerFlex
Manager templates
Perform this procedure to configure network settings in PowerFlex Manager for static bonding NIC port design. This design
supports up to two data networks.
Steps
1. Clone the template:
a. On the PowerFlex Manager menu bar, click Templates and then click Add a Template.
b. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
c. From the Category list, select Sample Templates.
d. From the Template to be Cloned list, select the template that matches the service you want to deploy. Click Next.
e. On the Template Information page, enter the template name. Select Create New Category from the Template Category list.
f. Enter Prod in the New Category Name box, specify the Firmware and Software Compliance version, and specify the service permissions for the template under Who should have access to the service deployed from this template. Click Next.
g. On the Additional Settings page, provide new values for network settings, OS settings, cluster settings, PowerFlex
gateway settings, and node pool settings.
h. Click Finish. The cloned template is added to the list on the Templates page.
2. Click the cloned template.
3. Click the node icon and click Edit.
4. Verify the number of nodes to deploy and click Continue.
5. For PowerFlex compute-only nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Port Channel.
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-node-mgmt, flex-vmotion, and PXE
● Interface 1 > Port 2 > Network (VLAN): data network1, data network2
● Interface 2 > Port 1 > Network (VLAN): flex-node-mgmt, flex-vmotion, and PXE
● Interface 2 > Port 2 > Network (VLAN): data network1, data network2
6. For PowerFlex hyperconverged nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Port Channel.
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-node-mgmt, flex-stor-mgmt, flex-vmotion, and PXE
● Interface 1 > Port 2 > Network (VLAN): data network1, data network2
● Interface 2 > Port 1 > Network (VLAN): flex-node-mgmt, flex-stor-mgmt, flex-vmotion, and PXE
● Interface 2 > Port 2 > Network (VLAN): data network1, data network2
7. For PowerFlex storage-only nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Port Channel.
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-stor-mgmt, data network1, and PXE
● Interface 1 > Port 2 > Network (VLAN): data network2
● Interface 2 > Port 1 > Network (VLAN): flex-stor-mgmt, data network1, and PXE
● Interface 2 > Port 2 > Network (VLAN): data network2
Configure non-bonded NIC port design network settings in PowerFlex
Manager templates
Perform this procedure to configure network settings in PowerFlex Manager for non-bonded NIC port design. This design
supports up to two data networks.
Steps
1. Clone the template:
a. On the PowerFlex Manager menu bar, click Templates and then click Add a Template.
b. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
c. From the Category list, select Sample Templates.
d. From the Template to be Cloned list, select the template that matches the service you want to deploy. Click Next.
e. On the Template Information page, enter the template name. Select Create New Category from the Template Category list.
f. Enter Prod in the New Category Name box, specify the Firmware and Software Compliance version, and specify the service permissions for the template under Who should have access to the service deployed from this template. Click Next.
g. On the Additional Settings page, provide new values for network settings, OS settings, cluster settings, PowerFlex
gateway settings, and node pool settings.
h. Click Finish. The cloned template is added to the list on the Templates page.
2. Click the cloned template.
3. Click the node icon and click Edit.
4. Verify the number of nodes to deploy and click Continue.
5. For PowerFlex hyperconverged nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Trunk port.
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-node-mgmt, flex-stor-mgmt, flex-vmotion, and PXE
● Interface 1 > Port 2 > Network (VLAN): data network1
● Interface 2 > Port 1 > Network (VLAN): flex-node-mgmt, flex-stor-mgmt, flex-vmotion, and PXE
● Interface 2 > Port 2 > Network (VLAN): data network2
6. For PowerFlex storage-only nodes, edit the parameters for the network configuration as follows:
a. Under OS Settings > Switch Port Configuration, select Trunk port.
b. Under Network Settings > Interfaces, enter the following details:
● Interface 1 > Port 1 > Network (VLAN): flex-stor-mgmt and PXE
● Interface 1 > Port 2 > Network (VLAN): data network1
● Interface 2 > Port 1 > Network (VLAN): flex-stor-mgmt and PXE
● Interface 2 > Port 2 > Network (VLAN): data network2
Publish a template
After cloning a template, you must publish it to indicate that the template is ready for deployment.
Steps
1. On the Templates page, select a template. In the right pane, click View Details.
2. In the right pane, click Edit.
3. Perform the following steps to add a component type to the template:
a. Click Add Node or Add Cluster.
If you select a template from Sample templates, PowerFlex Manager selects the default number of PowerFlex nodes for deployment. To add additional PowerFlex nodes, click Add Node.
b. Select the network automation type and click Continue.
c. If adding a node, in the Number of Instances box, provide the number of component instances to include in the template.
d. If adding a cluster, in the Select a Component box, choose the cluster type.
e. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with selected components, click Associate Selected and then select the components to associate.
Based on the component type, specific required settings and properties appear automatically. You can edit components as needed.
f. Click Continue to add the component to the template.
Repeat step 3 to add additional components.
4. To use external SDC to SDS communication, complete the following steps:
a. In the template, under Node > Network settings, select the required VLANs to enable external SDC communication on
SDS data interfaces.
b. Under Node > Static routes, select Enabled. Click Add New Static Route.
c. Choose the source and destination VLANs from the menu, and manually enter the gateway IP address of the SDS data
network VLAN. Repeat for all data VLANs.
5. To use native asynchronous replication, do the following:
a. In the template, under Node > OS settings, select Enable replication.
b. In the template, under Node > Network settings, select the required VLANs to enable replication on interface 1, port 1 and port 2. Repeat the same on interface 2.
c. Under Node > Static routes, select Enabled. Click Add New Static Route.
d. Choose the source and destination VLANs from the menu, and manually enter the gateway IP address of the source replication VLAN. Repeat for the second replication VLAN.
6. For Windows Server 2016 or 2019, expand OS Settings:
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
● Enter the product key.
● Select the Install Hyper-V checkbox to enable Hyper-V.
7. When finished adding components, click Save.
8. Click Publish Template.
After publishing a template, you can use the template to deploy a service. For more information, see the PowerFlex Manager
online help.
Add a new compatibility management file
Use this procedure to add a new compatibility management file to PowerFlex Manager to provide valid upgrade paths for the
PowerFlex Manager virtual appliance and RCMs.
About this task
If the file is not uploaded to the PowerFlex Manager appliance, upgrades of the PowerFlex Manager appliance and services to the latest version are blocked. The compatibility management file helps bring the system into compliance and provides details about the supported upgrade paths.
Upload the latest compatibility management file to ensure that PowerFlex Manager has access to the latest upgrade
information.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. Alternatively, download the valid compatibility management file from the Dell EMC Download Center to a local or remote server (FTP/HTTP).
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Deploying the PowerFlex GUI presentation server
You can use a sample template to clone a PowerFlex GUI presentation server and deploy it using the PowerFlex Manager in the
customer cluster.
About this task
NOTE: This procedure is not applicable for PowerFlex management controller 2.0.
Prerequisites
Discover and set the PowerFlex management controller VMware vCenter as Managed in the PowerFlex Manager and select
this VMware vCenter and vSAN datastore for the presentation server template.
Steps
1. Log in to PowerFlex Manager.
2. On the PowerFlex Manager menu bar, click Templates > Sample Templates > Management - Presentation Server and click Clone in the right pane.
3. In the Clone Template dialog box, enter a template name under Template Name.
4. Select a template category from the Template Category list. To create a template category, select Create New Category
and enter the Category name.
5. In the Template Description, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the firmware and software compliance list.
7. Indicate access rights to the service deployed from this template by selecting one of the following options:
● PowerFlex Manager administrators
● PowerFlex Manager administrators and specific standard and operator users
○ Click Add Users to add one or more standard and or operator users to the list and click Remove Users to remove
users from the list.
● PowerFlex Manager administrators and all standard and operator users
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Presentation Server
Settings, and Cluster Settings.
Under PowerFlex Presentation Server settings, select the presentation server credential that is created for the
presentation server.
10. Select the PowerFlex management controller VMware vCenter or single vCenter.
11. Click Finish.
12. Once the template is created, click Templates, select the PowerFlex presentation server template, and click Edit.
13. Edit each component (PowerFlex presentation server and VMware cluster), select the required fields, and click Save.
14. Publish the template and click Deploy.
NOTE: The presentation server is autodiscovered on the Resources page after the service is successfully deployed.
Adjust the logout time in the presentation server
Use this procedure to change the default presentation server timeout values for the source and target in the customer cluster.
Steps
1. Start an SSH session to the presentation server.
2. Type systemctl stop mgmt-server to stop the server.
3. Type mkdir -p /etc/mgmt-server/.config/ to ensure the config directory is created.
4. Type vim /etc/mgmt-server/.config/mgmt-server to create a custom properties file.
5. Add the following line, replacing 1500 with the desired number of minutes: MGMT_SERVER_OPTIONS='tokenLifeSpanMinutes=1500'
6. Save the file.
7. Type systemctl start mgmt-server to start the server.
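As a quick sanity check (not part of the documented procedure), you can confirm that the server restarted and that the custom properties file is in place; the path matches the file created in step 4:

# Verify that the management server is active again
systemctl status mgmt-server
# Confirm the custom token lifespan setting was written
cat /etc/mgmt-server/.config/mgmt-server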
Deploy the PowerFlex Gateway
Perform this task to deploy the PowerFlex Gateway.
Prerequisites
Ensure that the management environment has the available resources to deploy the PowerFlex Gateway:
● Compliance file version before 3.0: 2 vCPU, 4 GB vRAM, 16 GB disk space
● Compliance file version 3.1 and later: 2 vCPU, 8 GB vRAM, 16 GB disk space
About this task
After the gateway deployment completes, if necessary, see knowledge base article 541865 for instructions on resolving the following error: The Root Directory Filled up due to large Localhost_access.log files.
NOTE: PowerFlex management controller 2.0 requires two gateways: one for management and one for the customer. For choosing the appropriate volume and datastores during the PowerFlex Gateway installation, see PowerFlex management controller datastore and virtual machine details.
Steps
1. From the PowerFlex Manager menu, click Templates.
2. On the Templates page, click Add a Template.
3. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
4. For Category, select Sample Templates. For Template to be Cloned, select Management - PowerFlex Gateway. Click
Next.
5. On the Template Information page, provide the template name, template category, template description, firmware and
software compliance, and who should have access to the service deployed from this template. Click Next.
6. On the Additional Settings page, enter new values for the Network Settings, PowerFlex Gateway Settings, and
Cluster Settings.
7. Click Finish.
8. After creating the template, click Templates, select the PowerFlex Gateway template, and click Edit.
9. Edit the PowerFlex Gateway and VMware cluster, select the required field, and click Save.
10. Publish the template and deploy the service.
Deploy the CloudLink Center
Use this procedure to deploy a CloudLink Center. Although PowerFlex Manager supports up to three instances of CloudLink
Center, two are recommended.
Prerequisites
Ensure the following:
● Hypervisor management or PowerFlex management networks are added to PowerFlex Manager
● A VMware vCenter with a valid data center, cluster, network, and datastore is discovered.
Steps
1. From the PowerFlex Manager menu, click Templates > Sample Templates.
2. On the Sample Templates page, click Management - CloudLink Center > Clone.
3. In the Clone Template wizard, do the following:
a. Enter a template name.
b. From the Template Category list, select a template category. To create a category, select Create New Category from
the list.
c. Enter a Template Description (optional).
d. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
e. Specify the service permissions for this template under Who should have access to the service deployed from this
template?:
● To restrict access to administrators, select Only PowerFlex Manager Administrators.
● To grant access to administrators and specific standard users, select PowerFlex Manager Administrators and
Specific Standard and Operator Users and perform the following steps:
i. Click Add User(s) to add one or more standard or operator users to the list.
ii. To remove a standard or operator user from the list, select the user and click Remove User(s).
iii. After adding the standard or operator users, select or clear the check box next to each user to grant or block access to use this template.
● To grant access to administrators and all standard users, select PowerFlex Manager Administrators and All
Standard and Operator Users.
f. Click Next.
4. On the Additional Settings page, do the following:
a. Under Network Settings, select Hypervisor Network.
b. Under OS Settings, select CLC credential or create a credential with root or CloudLink user by clicking +.
c. Under Cloudlink Settings, do the following:
i. Select the secadmin credential from the list or create a secadmin credential by clicking + and doing the following:
i. Enter the Credential Name.
ii. Enter the User Name as secadmin.
iii. Leave the Domain empty.
iv. Enter the password for secadmin in Password and Confirm Password.
v. Select V2 in SNMP Type.
vi. Click Save.
ii. Select a license file from the list based on the types of drives or click + to upload a license through the Add
Software License page.
NOTE: For SSD/NVMe drives, upload a capacity-based license. For SED drives, upload an SED-based license.
d. Under Cluster Settings, select Management vCenter.
e. Click Finish.
5. Select the VMware Cluster and click Edit > Continue.
a. Under Cluster Settings, select Datacenter Name, and then select Cluster Name from the drop-down list.
b. Under vSphere Network Settings, select the hypervisor management port group.
NOTE: Edit the template to use the PowerFlex management port group.
c. Click Save.
NOTE: To deploy a CloudLink Center from PowerFlex Manager, you need a Management vCenter, Datacenter, and
Cluster along with DvSwitch port groups for PowerFlex management or hypervisor management.
6. Select the VM and click Edit > Continue (by default, the number of CloudLink instances is two and PowerFlex Manager
supports up to three instances).
a. Under VM Settings select the Datastore and Network from the drop-down list.
b. Under Cloudlink Settings select the following:
i. For Host Name Selection, either select Specify At Deployment Time to enter the name manually at deployment time, or select Auto Generate to have PowerFlex Manager generate the name.
ii. Enter the vault passwords.
NOTE: Other details such as operating system credentials, NTP, and secadmin credentials are auto populated.
7. Under Additional Cloudlink Settings, you can choose either or both of the following settings:
● Configure Syslog Forwarding
a. Select the check box to configure syslog forwarding.
b. For Syslog Facility, select the syslog remote server from the list.
● Configure Email Notifications
a. Select the check box to configure email alerts.
b. Specify the IP address of the email server.
c. Specify the port number for the email server. The default port is 25. Enter the port numbers in a comma-separated list, with values between 1 and 65535.
d. Specify the email address for the sender.
e. Specify the required username and password.
8. Click Save.
9. Click Publish Template and click Yes to confirm.
10. In the Deploy Service wizard, do the following:
a. Select the published template from the drop-down list, and enter the Service Name and description.
b. Select who should have access to the service and click Next.
c. Provide the Hostname and click Next.
d. Select Deploy Now or Schedule Deployment and click Next.
e. Review the details on the Summary page and click Finish.
Create a VM-VM affinity rule for a CloudLink Center deployment
Use this procedure to create a VM-VM affinity rule for a CloudLink Center deployment.
Steps
1. Log in to the VMware vSphere client, and browse to the cluster.
2. Click the Configure tab.
3. Under Configuration, click VM/Host Rules.
4. Click Add.
5. In the Create VM/Host Rule dialog box, type the rule name as CLC_Cluster_Rule.
6. From the Type list, select Separate Virtual Machines.
7. Click Add.
8. Select the two CloudLink Center VMs to which the rule applies, and click OK.
9. Click OK.
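If you prefer the command line, an equivalent Separate Virtual Machines rule can be created with the open-source govc CLI instead of the vSphere client. This is a hedged sketch, not part of the documented procedure: it assumes govc is installed and authenticated against the vCenter (GOVC_URL and related environment variables set), and the cluster and VM names are placeholders:

# Create an anti-affinity (Separate Virtual Machines) rule for the two CloudLink Center VMs
govc cluster.rule.create -cluster <cluster-name> -name CLC_Cluster_Rule -enable -anti-affinity <clc-vm-1> <clc-vm-2>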
Deploy a service
Use this procedure to deploy a service. You cannot deploy a service using a template that is in draft state. Publish the template
before using it to deploy a service.
Steps
1. On the menu bar, click Services > Deploy New Service.
2. On the Deploy Service page, perform the following steps:
a. From the Select Published Template list, select the template to deploy a service.
b. Enter the Service Name and Service Description that identifies the service.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
PowerFlex Manager checks the VMware vCenter version to determine if it matches the VMware ESXi version for the
selected compliance version. If the VMware ESXi version is greater than the vCenter version, PowerFlex Manager blocks
the service deployment and displays an error. PowerFlex Manager instructs you to upgrade vCenter first, or use a
different compliance version that is compatible with the installed vCenter version.
NOTE: Changing the firmware repository might update the firmware level on nodes for this service. The global
default firmware repository maintains the firmware on the shared devices.
d. Indicate Who should have access to the service deployed from this template by selecting one of the available
options.
For a hyperconverged or PowerFlex storage-only node deployment, if you want to use CloudLink encryption, perform the
following:
a. Verify that CloudLink Center is deployed.
b. In the template, under Node settings, select Enable Encryption (Software Encryption/Self Encrypting Drive).
c. Under PowerFlex Cluster settings, select CloudLink Center.
3. Click Next.
4. On the Deployment Settings page, configure the required settings. You can override many of the settings that are
specified in the template. You must specify other settings that are not part of the template:
If you are deploying a service with CloudLink, ensure that the correct CloudLink Center is displayed under the CloudLink
Center Settings.
a. To configure OS Settings, select an IP Source. To manually enter the IP address, select User Entered IP. From the IP
Source list, select Manual Entry. Then enter the IP address in the Static IP Address box.
b. To configure Hardware Settings, select the node source from the Node Source list.
● If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see only
the pools for which they have permission. Select the Retry On Failure option to ensure that PowerFlex Manager
selects another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
● If you select Manual Entry, the Choose Node list is displayed. Select the node for deployment from the list by its
Service Tag.
NOTE: For a fault set-enabled service, you can choose the fault set number or choose PowerFlex Manager
selected fault set.
c. Under PowerFlex Settings, specify the Journal Capacity for a storage-only or hyperconverged service that has replication enabled in the template. The default journal capacity is 10% of the overall capacity; however, you can customize the capacity as required.
d. Under PowerFlex Settings, choose one of the following options for PowerFlex MDM Virtual IP Source:
● PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
● User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the service template.
NOTE: Verify that the correct disk type (NVMe or SSD) is selected. On the Deployment Settings page, check the storage pool disk type under PowerFlex Settings.
5. Click Next.
6. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the service immediately.
● Deploy Later—Select this option and enter the date and time to deploy the service.
7. Review the Summary page.
The Summary page gives you a preview of what the service will look like after the deployment.
8. Click Finish when you are ready to begin the deployment. For more information, see PowerFlex Manager online help.
Add volumes to a service
Add volumes to a service and verify that the machines are in a connected state.
About this task
When the hyperconverged deployment is complete, PowerFlex Manager automatically creates two 16 GB thin-provisioned volumes named powerflex-service-vol-1 and powerflex-service-vol-2.
For a storage-only deployment, the service is incomplete; follow these steps to add the volumes.
For a compute-only deployment, the service will be in lifecycle mode because there is no information about the protection domain and storage. The vCLS VMs must be moved using the migration wizard.
Steps
1. Log in to PowerFlex Manager.
2. From the Services page, click Add Resources and choose Add volumes.
3. From the Add Volume wizard, click Add Existing Volumes or Create New Volumes.
The Add Existing Volumes option is only available for the hyperconverged service.
4. If you selected Add Existing Volumes, select the Volume and add the Datastore Name Template from the Add Existing
Volumes page.
5. To create a new volume for a hyperconverged service:
a. Click Add Volume.
b. From the Volume Name list, select Create New Volume to create a new volume now, or select an Auto-Generate name
when you create multiple volumes.
c. In the New Volume Name box, type a name for the volume.
d. From the Datastore Name list, select Create new datastore to create a new datastore or select an existing datastore.
If you choose a volume that is mapped to a datastore that was created previously in another hyperconverged or compute-only service, you need to select the same datastore that was associated with the volume in the other service.
e. In the New Datastore Name box, type a name for the datastore.
f. From the Storage Pool list, select the required storage pool.
g. In the Volume Size (GB) box, enter the required volume size.
h. From the Volume Type list, select Thin or Thick.
A thick volume provides a larger amount of storage in advance, and a thin volume provides on-demand storage and faster setup and startup times.
i. To use an Auto-Generate name, in the New Volume Name field:
j. From the Volume Name Template, modify the template based on the volume naming convention.
k. From the How Many Volumes field, enter the number of volumes that need to be created.
l. From the Datastore Name Template, modify the template based on the datastore naming convention.
m. From the Storage Pool list, select the required storage pool.
n. In the Volume Size (GB) box, enter the required volume size. The minimum size is 8 GB and the value must be divisible by 8.
o. From the Volume Type list, select Thin or Thick.
p. Click Next > Finish.
6. To create a new volume for a storage-only service:
a. From the Volume Name list, select Create New Volume.
b. In the New Volume Name box, type a name for the volume.
c. From the Storage Pool list, select the required storage pool.
d. In the Volume Size (GB) box, enter the required volume size.
e. From the Volume Type list, select Thin or Thick.
A thick volume provides a larger amount of storage in advance, and a thin volume provides on-demand storage and faster setup and startup times. If you enable compression for the volume, thin is the only option available for Volume Type.
f. To use an Auto-Generate name, in the New Volume Name field:
g. From the Volume Name Template, modify the template based on the volume naming convention.
h. From the How Many Volumes field, enter the number of volumes that need to be created.
i. From the Storage Pool list, select the required storage pool.
j. In the Volume Size (GB) box, enter the required volume size. The minimum size is 8 GB and the value must be divisible by 8.
k. From the Volume Type list, select Thin or Thick.
l. Click Next > Finish.
7. To create a new volume for a compute-only service:
a. From the Volume Name list, select an existing volume.
For a compute-only service, you can only select an existing volume that has not yet been mapped.
b. From the Datastore Name list, select Create new datastore or select an existing datastore.
The Datastore Name field is only available for a hyperconverged or compute-only service, as it applies only to services
with ESXi. If the volume was originally created in a storage-only service, you must select Create New Datastore to create
a new datastore. Alternatively, if the volume was originally created in a hyperconverged service, you must select the
datastore that was already mapped to the selected volume in the other service.
c. In the New Datastore Name box, type a name for the datastore.
8. (Optional) Click Add Volume to add another volume and enter the required information.
9. Click Save.
The service moves to the In Progress state and the new volume icons appear on the Service Details page. After the
deployment completes successfully, the new volumes are displayed and indicated by a check mark in the Storage list on the
Service Details page. The PowerFlex 3.0.1.2 and older GUI shows the new volumes under the storage pool. In PowerFlex 3.5,
new volumes are under Configuration > Volumes. For a storage-only service, the volumes are created, but not mapped. For a
compute-only or hyperconverged service, the volumes are mapped to SDCs. In the vSphere client, you can see the volumes
in the storage section and also see the hosts that are mapped to the volumes, once the mappings are in place.
10. After the service is successfully deployed, log in to the CloudLink Center that was discovered in PowerFlex Manager, and
verify that the machines are in connected state, and drives are encrypted.
NOTE: For PowerFlex nodes with SED drives, CloudLink Center displays the statuses Encrypted HW and Managed.
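For reference, the volumes that PowerFlex Manager creates and maps can also be inspected from the PowerFlex CLI (scli) on the primary MDM. This is a hedged sketch, assuming a logged-in scli session; the volume name is one of the auto-created names mentioned above, and PowerFlex Manager remains the supported way to add volumes to a service:

# List all volumes in the system, including those created by PowerFlex Manager
scli --query_all_volumes
# Show the details (size, mappings, storage pool) of a single volume
scli --query_volume --volume_name powerflex-service-vol-1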
Resize a volume
After adding volumes to a service, you can resize the volumes in managed mode.
About this task
For a storage-only service, you can increase the volume size. For a VMware ESXi compute-only service, you can increase the
size of the datastore that is associated with the volume. For a hyperconverged service, you can increase the size of both the
volume and the datastore.
If you resize a volume in a storage-only service, you must update the datastore size in the corresponding VMware ESXi
compute-only service. The datastore size cannot exceed the size of the volume.
Steps
1. Log in to PowerFlex Manager.
2. On the Services page, click the volume component and choose Volume Actions > Resize.
3. Choose the volume that you want to resize:
a. Click Select Volume.
b. Enter a volume or datastore name search string in the Search Text box.
c. Optionally, apply additional search criteria by specifying values for the Size, Type, Compression, and Storage Pool
filters.
d. Click Search.
PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns
more than 50 volumes, you must refine the search criteria to return only 50 volumes.
e. Select the row for the volume you want to resize.
f. Click Save.
4. Update the sizing information:
If you are resizing a volume for a hyperconverged service, perform these steps:
a. In the New Volume Size (GB) field, specify a value that is greater than the current volume size.
b. Optionally, select Resize Datastore to increase the size of the datastore.
If you are resizing a volume for a storage-only service, enter a value in the New Volume Size (GB) field. Specify a value
that is greater than the current volume size. Values must be in multiples of eight, or an error occurs.
If you are resizing a volume for a compute-only service, review the Volume Size (GB) field to see if the volume size is
greater than Current Datastore Size (GB). If it is, PowerFlex Manager expands the datastore size.
5. Click Save.
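After the resize completes, you can confirm the new capacity from the PowerFlex CLI on the primary MDM. A hedged sketch with an illustrative volume name, assuming a logged-in scli session; as noted above, sizes must be multiples of 8 GB:

# Verify that the reported volume size matches the value entered in PowerFlex Manager
scli --query_volume --volume_name <volume-name>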
Migrate vCLS VMs for PowerFlex hyperconverged nodes and
PowerFlex compute-only nodes
Use this procedure to migrate the vCLS VMs.
About this task
The VMware vSphere 7.0 update creates vCLS VMs. When hosts are added to the cluster, these VMs are deployed automatically; each cluster contains three vCLS VMs. Do not make changes to these VMs.
Steps
1. Go to the VMs using the VMs and Templates view or click vCenter Server Extensions > vSphere ESX Agent Manager > VMs.
2. Click the vCLS folder.
3. After the service is deployed, PowerFlex Manager creates two dedicated 16 GB service volumes and the VMs migrate. The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are powerflex-<esxclustershortname>-ds1 and powerflex-<esxclustershortname>-ds2.
4. For PowerFlex compute-only nodes, the VMs must be migrated using the Migrate vCLS VMs tab from the service Action
page.
Two additional volumes and datastores are created and the VMs are migrated. Until then, the service remains in lifecycle mode.
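To list the vCLS VMs and confirm their placement after migration without clicking through the UI, you can use the govc CLI. A hedged sketch, assuming govc is installed and authenticated; the pattern matches the default vCLS VM names:

# Find all vCLS virtual machines in the vCenter inventory
govc find / -type m -name 'vCLS*'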
Provide a PowerFlex Manager license key after initial setup
Perform this task to provide the PowerFlex Manager license key after initial installation. You might need to do this in response to
a license missing error.
Prerequisites
Log in to the Dell EMC Download Center and download the PowerFlex Manager license key file to a local network share.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and then click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the License Management section, click Add.
4. In the Add License window, click Choose File next to Upload License. Locate and select the license file and click Open.
5. Click Save to apply the license.
Configure the alert connector
Configure the alert connector to register the device with Secure Remote Services using a unique software ID. This enables
critical and error alerting for node and PowerFlex resources that are managed by PowerFlex Manager.
About this task
CloudIQ is enabled by default.
NOTE: CloudIQ enables PowerFlex Manager to transport telemetry data alerts and analytics using SRS.
Prerequisites
Before you configure the alert connector, ensure:
● The primary MDM in the PowerFlex cluster is valid and up and running.
● Secure Remote Services gateway is configured in the data center and connected to Secure Remote Services.
Steps
1. Log in to PowerFlex Manager using the following credentials:
● Username: admin
● Password: admin
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. Click Add in the Alert Connector section.
4. Complete the following steps in the Device Registration section:
a. Select the device type.
b. Enter your unique software ID in the Enterprise License Management Systems (ELMS) Software Unique ID box.
For information about how to obtain the ID, consult the License Authorization email that you received.
c. Enter the unique number associated with your system in the Solution Serial Number box, for example, V1234567.
d. Select one or more of the following options for the Connection Type:
● SRS
● Email
e. Optionally, disable CloudIQ integration by deselecting Enable CloudIQ.
f. Select the severity level for which you want to see alerts by choosing one of the following Alert Filter values:
● Critical (Recommended)
● Warning
● Info
g. Specify how often you want to check for alerts by entering an Alert Polling Interval value in hours or minutes.
5. For a Secure Remote Services configuration, complete the following steps in the SRS Section under Connector Settings:
a. Enter a node address for the Secure Remote Services gateway in the SRS Gateway Host IP or FQDN box.
NOTE: Secure Remote Services support recommends using the IP address when registering.
b. Enter the port number in the SRS Gateway Host Port box.
c. Enter the required username in the User ID box.
d. Enter the required password in the Password or NT Token box.
6. For an email configuration, complete the following steps in the Email Server Configuration under Connector Settings:
a. Choose the Server Type.
● SMTP
● SMTPS over SSL
● SMTPS STARTTLS
b. Enter an IP address or fully qualified domain name for the email server in the Server IP or FQDN box.
c. Enter the port number for the email server in the Port box.
d. Optionally enter the username in the User ID box.
e. Optionally enter the password in the Password box.
f. Enter the email address for the sender in the Sender Address box.
g. Enter one or more email recipient addresses.
7. Click Save.
8. To verify that the alert connector is receiving alerts, click Send Test Alert.
9. To verify the connection, click Test Connection.
When the device is registered for alerting, topology and telemetry reports are automatically sent to Secure Remote Services
weekly, starting at the time that the device was registered.
Configuring SNMP trap and syslog forwarding
You can configure PowerFlex Manager for SNMP trap and syslog forwarding.
Configure SNMP communication to enable PowerFlex Manager to receive and forward SNMP traps. PowerFlex Manager can
receive SNMP traps from system devices and forward them to one or more remote network management systems.
You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network
management system. Authentication is provided by PowerFlex Manager, through the configuration settings you provide.
Configure SNMP trap forwarding
To configure SNMP trap forwarding, specify the access credentials for the SNMP version you are using and then add the
remote server as a trap destination.
About this task
PowerFlex Manager supports different SNMP versions, depending on the communication path and function. The following table
summarizes the functions and supported SNMP versions:
● PowerFlex Manager receives traps from all devices, including iDRAC: v2
● PowerFlex Manager receives traps from iDRAC devices only: v3
● PowerFlex Manager forwards traps to the network management system: v2, v3
NOTE: SNMPv1 is supported wherever SNMPv2 is supported.
PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.
SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all
traps coming from all devices are forwarded to all configured destinations in the appropriate format.
PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically
purges the oldest data to free up space.
For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community
strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that
credential here.
For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community
strings to the network management system. This configuration happens outside of PowerFlex Manager.
For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex
Manager engine ID, user details, and security level to the network management system. This configuration happens outside of
PowerFlex Manager.
Prerequisites
PowerFlex Manager and the network management system use access credentials with different security levels to establish
two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the
security level for each access credential and whether the credential supports encryption.
To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following
table:
● SNMPv2: the community strings by which traps are received and forwarded
● SNMPv3: the user and security settings
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings, and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.
4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the
community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.
You can add more than one community string. For example, add more than one if the community string by which PowerFlex
Manager receives traps differs from the community string by which it forwards traps to a remote destination.
NOTE: An SNMPv2 community string that is configured in the credentials during discovery of the iDRAC or through
management is also displayed here. You can create a new community string or use the existing one.
5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are
forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:
● Minimal (noAuthNoPriv): no authentication and no encryption. authPassword: not required. privPassword: not required.
● Moderate (authNoPriv): messages are authenticated but not encrypted. authPassword: required (MD5, at least 8 characters). privPassword: not required.
● Maximum (authPriv): messages are authenticated and encrypted. authPassword: required (MD5, at least 8 characters). privPassword: required (MD5 and DES, both at least 8 characters).
Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote
network management system so it can receive traps from PowerFlex Manager.
You can add more than one user.
6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.
a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager
forwards SNMP traps.
b. Provide the Port for the network management system destination. The SNMP Trap Port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user defined, select the appropriate user for the particular trap destination.
7. Click Save.
The Virtual Appliance Management page displays the configured details as shown below:
Trap Forwarding <destination-ip>(SNMP v2 community string or SNMP v3 user)
NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance
Management, and click Configure nodes for alert connector.
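To exercise the trap path end to end, you can send a synthetic SNMPv2 trap to PowerFlex Manager from any Linux host with the net-snmp utilities installed, then watch for the forwarded trap on the configured destination. A hedged sketch, not part of the documented procedure; the community string and address are illustrative:

# Send a generic linkUp trap (OID 1.3.6.1.6.3.1.1.5.4) to PowerFlex Manager on the standard trap port
snmptrap -v 2c -c <community-string> <powerflex-manager-ip>:162 '' 1.3.6.1.6.3.1.1.5.4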
Configure syslog forwarding
You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network
management system. PowerFlex Manager provides authentication through the configuration settings you provide.
About this task
You can configure PowerFlex Manager to forward syslogs to up to five destination remote servers. You can set only one
forwarding entry per remote server.
You can apply forwarding filters based on facility type and severity level. For example, you can configure PowerFlex Manager to
forward all syslog messages to one remote server and then forward syslog messages of a given severity to a different remote
server. The default is to forward syslog messages of all facilities and severity levels to the remote syslog server.
Prerequisites
Ensure that the system components are configured to send syslog messages to PowerFlex Manager. This configuration happens
outside of PowerFlex Manager.
Ensure that you have the following information:
● Obtain the IP address or hostname of the remote syslog server and the port where the server is accepting syslog messages.
● If sending only some syslog messages to a remote server, you must know the facility and severity of the log messages to
forward.
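As an illustration of the component-side configuration that happens outside of PowerFlex Manager, a Linux component that uses rsyslog can be pointed at PowerFlex Manager with a one-line drop-in. A hedged sketch with an illustrative file name, assuming rsyslog is the syslog daemon on the component; a single @ forwards over UDP, while @@ would forward over TCP:

# Forward all syslog messages from this component to PowerFlex Manager over UDP port 514
echo '*.* @<powerflex-manager-ip>:514' > /etc/rsyslog.d/90-powerflex-manager.conf
systemctl restart rsyslog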
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the Syslog section, click Edit.
4. Click Add syslog forward.
5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.
6. Enter the destination Port where the remote server is accepting syslog messages. The default syslog port is 514.
7. Select the network Protocol used to transfer the syslog messages. The default is UDP.
8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.
9. Click Save to add the syslog forwarding destination.
The Virtual Appliance Management page displays the configured details as shown below:
Syslog Forwarding <destination-ip>(<Facility><Severity Level>)
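To confirm the forwarding chain after saving, you can emit a test message toward PowerFlex Manager with the util-linux logger tool and check that it arrives at the remote destination. A hedged sketch with illustrative values, not part of the documented procedure:

# Send a test syslog message to PowerFlex Manager over UDP port 514
logger --server <powerflex-manager-ip> --port 514 --udp "PowerFlex Manager syslog forwarding test"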
Deploying Windows-based PowerFlex compute-only nodes with
PowerFlex Manager (bare metal deployment)
You can use PowerFlex Manager to deploy Windows-based PowerFlex compute-only nodes. Dell EMC recommends using the
Windows ISOs that are published by Dell for node deployments performed with PowerFlex Manager. Optionally, you can use
a custom ISO. To deploy a custom Windows image through PowerFlex Manager, you must modify the target ISO to allow
PowerFlex Manager to automate the deployment process.
NOTE: Windows-based PowerFlex compute-only nodes are only supported for PowerFlex Manager versions prior to 3.7.
PowerFlex Manager can deploy Windows 2016 or 2019 as a bare-metal server or as an SDC node. Two types of installation are
available:
● STANDARDSERVER
● DATACENTERSERVER
For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.
NOTE: Deploy Windows for PowerFlex compute-only nodes using manual procedures for static bonding NIC and LACP
bonding NIC networking. After the deployment is complete, see Mapping a volume to a Windows-based compute-only node
using a PowerFlex version prior to 3.5 to manually map the PowerFlex volume to a Windows-based PowerFlex compute-only
node.
Preparing the build environment
Perform these steps to set up the build environment from which you have to prepare a custom Microsoft Windows ISO.
About this task
For any build environment that you use to prepare custom Microsoft Windows ISOs, it is recommended to deploy the same version of Microsoft Windows as the target ISO. For example, deploy Microsoft Windows 2016 or 2019 as the build environment if you plan on preparing a Microsoft Windows 2016 or 2019 ISO for PowerFlex Manager deployment. Be sure to reserve at least 100 GB of free space.
If dependencies (for example, library files) exist, you can mix newer and older versions of Microsoft Windows, but this is generally not recommended because of Microsoft Windows compatibility issues. The Microsoft Windows ISOs published for use with PowerFlex Manager are all prepared using the exact same version of Microsoft Windows as the build environment.
Prerequisites
Deploy the build environment that you use to prepare custom Microsoft Windows ISOs. The build environment must be a
Windows machine.
Steps
1. After the deployment of the build environment is complete, log in to the Windows machine using the local administrator account or an account with administrator privileges.
2. Create a directory to be used as the build environment.
Do not use any system directory as its root directory (for example, C:\Windows, C:\Program Files, C:\ProgramData). Also, be sure that the full path to the build environment directory does not contain any spaces. Spaces in the path can cause problems while Deployment Image Servicing and Management (DISM) is processing the images.
This example uses C:\Users\Administrator\Documents\winpe-build-env. A scripted sketch of this step and step 6 follows the procedure.
3. Locate the PowerShell script (/opt/razor-server/build-winpe/build-razor-winpe.ps1) on the PowerFlex Manager appliance. Copy this file to the build directory on the Windows machine (C:\Users\Administrator\Documents\winpe-build-env).
4. Download Microsoft ADK 8.1. Install the ADK with all its optional components.
The components that are listed should be:
● Application Compatibility Toolkit (ACT)
● Deployment Tools
● Windows Preinstallation Environment (Windows PE)
● User State Migration Tool (USMT)
● Volume Activation Management Tool (VAMT)
● Windows Performance Toolkit
● Windows Assessment Services
5. After the installation completes successfully, place the Microsoft Windows ISO in the build environment directory (C:\Users\Administrator\Documents\winpe-build-env).
6. Find the operating system drivers that are required for the target hardware (for example, drivers for the PowerFlex R740xd node) from the product support page at Dell Technologies Support. Once the drivers are downloaded and extracted, place them in the drivers directory (C:\Users\Administrator\Documents\winpe-build-env\Drivers).
If the Drivers folder does not exist, you must create it.
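Steps 2 and 6 of this procedure can be scripted. The following PowerShell sketch uses the example paths from this procedure and a hypothetical C:\Temp\ExtractedDrivers folder for the extracted driver files; adjust both to your environment:
# Create the build directory and warn about spaces, which break DISM processing
$buildEnv = "C:\Users\Administrator\Documents\winpe-build-env"
New-Item -ItemType Directory -Path $buildEnv -Force | Out-Null
if ($buildEnv -match "\s") { Write-Warning "Build path contains whitespace; DISM may fail." }
# Create the Drivers folder if it is missing, then copy the extracted drivers into it
$drivers = Join-Path $buildEnv "Drivers"
if (-not (Test-Path $drivers)) { New-Item -ItemType Directory -Path $drivers | Out-Null }
Copy-Item -Path "C:\Temp\ExtractedDrivers\*" -Destination $drivers -Recurse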
Download and install drivers
Perform these steps if you need to download and install drivers for the target Dell server model. Use this procedure if the driver
is not updated or discovered automatically by Windows.
Steps
1. Log in to the Dell EMC Download Center and click Product Support under the Support tab.
2. Find the target server model by looking up the service tag, product ID, or the model (for example, PowerEdge R740).
3. Click the Drivers & Downloads tab and select Drivers for OS Deployment for the category.
4. Download the Dell OS Driver Pack.
5. Copy the downloaded driver pack to the new Windows host (or download on the host itself).
6. Open the folder where the driver pack is downloaded and execute the file.
Building the Microsoft Windows ISO
Perform these steps to build the Microsoft Windows ISO.
About this task
Once the prerequisite steps have been completed, you can run the PowerShell script to build the Microsoft Windows ISO with
WinPE. This ISO can then be deployed through PowerFlex Manager.
Steps
1. Open the Windows PowerShell Console.
2. Change the working directory to the build environment directory by running the following command:
cd C:\Users\Administrator\Documents\winpe-build-env\
3. Ensure that the target Microsoft Windows ISO is in the build environment directory (C:\Users\Administrator\Documents\winpe-build-env\) and the required drivers are in the drivers directory (C:\Users\Administrator\Documents\winpe-build-env\Drivers).
4. Type powershell -executionpolicy bypass -file build-razor-winpe.ps1 DHCP <TARGET_ISO_NAME>
<NEW_ISO_NAME>, where:
<TARGET_ISO_NAME> is the name of the Microsoft Windows ISO to be used as the base and <NEW_ISO_NAME> is
the name of the new Microsoft Windows ISO to be created with WinPE configurations. These names must include the file
extension.
For example:
powershell -executionpolicy bypass -file build-razor-winpe.ps1 DHCP
default_windows2019.iso vxfm_windows2019.iso
Once the script starts running, the console displays the standard outputs and error outputs. Error outputs may not indicate
a significant problem. For example, if the Drivers directory includes files that are not supported by the target operating
system and its version, the script may display errors.
When the script completes successfully, the directory should have a new ISO file with WinPE configurations that are
embedded for PowerFlex Manager deployment use cases.
5. If the script process fails to create the ISO, you must perform some manual cleanup steps before retrying the build.
a. Type dism /cleanup-wim to clean up the mounted images in DISM.
You may have to run this command multiple times until the scan result shows that all images are cleaned up.
b. Delete the subfolders in the build environment directory, except for the Drivers folder.
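The cleanup in step 5 can also be scripted. This sketch assumes the build directory used earlier in this procedure; rerun the dism line until it reports that no mounted images remain:
$buildEnv = "C:\Users\Administrator\Documents\winpe-build-env"
# Clean up images left mounted by the failed build
dism /cleanup-wim
# Remove every subfolder except Drivers before retrying the build
Get-ChildItem -Path $buildEnv -Directory |
    Where-Object { $_.Name -ne "Drivers" } |
    Remove-Item -Recurse -Force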
Configuring PowerFlex Manager to use the Microsoft Windows image
After building the Microsoft Windows ISO, you need to specify the location of the operating system image install file in the
template you want to deploy in PowerFlex Manager.
Prerequisites
Build the Microsoft Windows ISO.
Steps
1. On the Settings page, add the OS image repository.
a. On the PowerFlex Manager menu bar, click Settings.
b. On the Compliance and OS Repositories page, click the OS Image Repositories tab, and then click Add.
c. In the Add OS Image Repository dialog box, enter the following:
i. In the Repository Name box, enter the name of the repository. The repository name must be unique and case insensitive.
ii. In the Image Type box, enter the image type.
iii. In the Source Path and Filename box, enter the path of the OS image file in a file share.
For a CIFS share, use the format in the following example: \\host\lab\isos\filename.iso
For an NFS share, use the format in the following example: Host:/var/nfs/filename.iso
A reachability check for the CIFS path is sketched after this procedure.
iv. If you are using the CIFS share, enter the User Name and Password to access the share.
d. Click Add.
2. On the PowerFlex Manager menu bar, click Templates and then open the template you want to deploy as a service.
3. Click Edit.
4. Select the node component and click Edit.
5. Click Continue.
6. Under OS Settings, specify the name of the OS image repository you added in the OS Image field.
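Confirming that the share path in step 1 resolves from a host on the same network can save a failed import. A minimal sketch using the example CIFS path shown above (your current credentials must already grant access to the share):
# Returns True when the share and file are reachable
Test-Path "\\host\lab\isos\filename.iso"
A result of False usually points to a mistyped path or missing share permissions rather than a PowerFlex Manager problem.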
Managing users
The Users page allows you to manage the users within PowerFlex Manager. You can create local users; edit, delete, enable, or disable existing users; and import users from Active Directory.
The Users page displays the following user information:
● User name
● Domain
● Role
● Last name
● First name
● State (enabled or disabled)
On this page, you can:
● Click the refresh icon on the upper left of the Users tab to retrieve newly added users.
● Edit or delete an existing user.
● Create a local user.
● Enable or disable a user account.
● Import Active Directory users.
You can click a specific user account to view the following user-related information: email, phone, and directory services.
You can also refresh the information on the page. To sort the users list based on the entries in a column, click the arrow next to the column header.
User roles
Every PowerFlex Manager user account is assigned one of the following roles:
● Administrator—Users with the administrator role can view all pages and perform all operations in PowerFlex Manager and
grant permission to standard users to perform certain operations.
● Standard—Users with the standard role can view certain pages and perform certain operations which are based on the
permission that is granted by an administrator. Also, standard users can grant permission to other users to view and perform
certain operations that they own.
● Operator—Users with the operator role can view certain pages and perform certain operations which are based on
the permission that is granted by an administrator. The primary operation that is performed by operator users is drive
replacement.
● Read-only—Users with the read-only role can view all pages but cannot perform any operations; PowerFlex Manager blocks every operation for a read-only login.
The following table describes the privileges or permissions that are associated with each role:
Feature | Permission | Administrator | Standard | Operator | Read-only
Dashboard | View | Yes | Owner and/or participant (NOTE: Standard users who are granted permission or are owners can view the dashboard data and links to services, templates, resource usage, and resource pools. However, the data is filtered by services, resource usage and pools, recent templates, and any recent activity performed by the user as an owner or participant.) | Yes | Yes (NOTE: Direct links to deploy a service from recent templates are disabled.)
Dashboard | Read | Yes | Owner and/or participant | Yes | Yes
Dashboard | Link to other pages | Yes | Owner and/or participant | Yes | Yes
Services | View | Yes | Owner and/or participant (NOTE: Users can view only services that they own or are granted permission to.) | No | Yes
Services | Deploy a service | Yes | Owner and/or participant | No | No
Services | Export to file | Yes | Owner and/or participant | Participant (NOTE: Users can only view a service that they are granted permission to or have been made a participant.) | No
Services | Add an existing service | Yes | Owner and/or participant | No | No
Service Details | View | Yes | Owner and/or participant | Participant (NOTE: On the Service page, users can view service details and perform actions only for services for which they are owners or are granted permission. Users with this role cannot perform any firmware action.) | Yes
Service Details | Open device console | Yes | Owner and/or participant | No | No
Service Details | Edit service information | Yes | Owner | No | No
Service Details | Delete a service | Yes | Owner | No | No
Service Details | Cancel a service | Yes | Owner | No | No
Service Details | Retry | Yes | Owner | No | No
Service Details | Remove a service | Yes | Owner | No | No
Service Details | View all settings | Yes | Owner and/or participant | Participant (NOTE: Users can only view a service that they are granted permission to or have been made a participant.) | Yes
Service Details | Export to file | Yes | Owner and/or participant | Participant | No
Service Details | Add component | Yes | Owner | No | No
Service Details | Firmware actions | Yes | No | No | No
Service Details | Drive replacement | Yes | Owner and/or participant | Participant | No
Service Details | Service mode | Yes | Owner and/or participant | No | No
Service Details | MDM reconfiguration | Yes | Owner and/or participant | No | No
Service Details | Generate troubleshooting bundle | Yes | Owner and/or participant | Participant | Yes
Service Details | Perform compliance upgrades | Yes | Owner | No | No
Service Details | View compliance report | Yes | Owner | Participant | Yes
Templates | View | Yes | Participant (NOTE: Users can only view templates for which they have been granted permission by an administrator.) | N/a | Yes
Templates | Read template | Yes | Participant | N/a | Yes
Templates | Create new template | Yes | No | N/a | No
Templates | Edit template | Yes | No | N/a | No
Templates | Delete template | Yes | No | N/a | No
Templates | View template details | Yes | Participant | N/a | Yes
Templates | Clone template | Yes | No | N/a | No
Template Edit | View | Yes | No | N/a | No
Template Edit | Edit name/category/description | Yes | No | N/a | No
Template Edit | Publish template | Yes | No | N/a | No
Template Edit | Delete template | Yes | No | N/a | No
Template Edit | View all settings | Yes | No | N/a | No
Template Edit | Import template | Yes | No | N/a | No
Template Details | View | Yes | Participant | N/a | Yes
Template Details | Deploy service | Yes | Participant | N/a | No
Template Details | Edit | Yes | No | N/a | No
Template Details | View all settings | Yes | Participant | N/a | Yes
Template Details | Delete template | Yes | No | N/a | No
Resources | View | Yes | Participant (NOTE: Users can view resources that are part of a node pool for which they are granted permission. They can also view common and shared resources that are not part of a pool. However, users with this role can only run an inventory update on the resources.) | Participant | Yes
Resources | View all resources | Yes | Participant | Participant | Yes
Resources | Run discovery | Yes | No | No | No
Resources | Remove resources | Yes | No | No | No
Resources | Manage or unmanage resources | Yes | No | No | No
Resources | Run inventory | Yes | Participant | Participant | No
Resources | View details (all tabs) | Yes | Participant | Participant | Yes
Resources | Launch resource element manager (in details) | Yes | Participant | Participant | No
Resources | Perform compliance upgrades | Yes | Owner | No | No
Resources | View compliance report | Yes | Owner | Participant | Yes
Node Pools | View | Yes | Participant | N/a | Yes
Node Pools | Create | Yes | No | N/a | No
Node Pools | Edit | Yes | No | N/a | No
Node Pools | Delete | Yes | No | N/a | No
Compliance Versions | View | Yes | Yes | N/a | Yes
Compliance Versions | Add | Yes | No | N/a | No
Compliance Versions | Remove | Yes | No | N/a | No
Compliance Versions | Set as default | Yes | No | N/a | No
Compliance Versions | Import latest | Yes | No | N/a | No
Compliance Versions | View bundles | Yes | Yes | N/a | Yes
Settings | View | Yes | No (NOTE: Users cannot view the Settings page.) | No | Yes
Backup and Restore | View | Yes | N/a | N/a | Yes
Backup and Restore | Backup now | Yes | N/a | N/a | No
Backup and Restore | Restore | Yes | N/a | N/a | No
Backup and Restore | Edit settings and details | Yes | N/a | N/a | No
Backup and Restore | Edit auto schedule backup | Yes | N/a | N/a | No
Credential Management | View | Yes | N/a | N/a | Yes
Credential Management | Create | Yes | N/a | N/a | No
Credential Management | Edit | Yes | N/a | N/a | No
Credential Management | Delete | Yes | N/a | N/a | No
Getting Started | View | Yes | N/a | N/a | Yes
Getting Started | Configure Compliance, Define Networks, Discover Resources, Add Existing Service, Add a Template | Yes | N/a | N/a | No
Jobs | View | Yes | N/a | N/a | Yes
Jobs | Cancel | Yes | N/a | N/a | No
Alerts | View | Yes | N/a | N/a | Yes
Alerts | Acknowledge | Yes | N/a | N/a | No
Logs | View | Yes | N/a | N/a | Yes
Logs | Export all | Yes | N/a | N/a | No
Logs | Purge | Yes | N/a | N/a | No
Networks | View | Yes | N/a | N/a | Yes
Networks | Define | Yes | N/a | N/a | No
Networks | Edit | Yes | N/a | N/a | No
Networks | Delete | Yes | N/a | N/a | No
Compliance and OS repository | View | Yes | Yes | N/a | Yes
Compliance and OS repository | Add repository | Yes | N/a | N/a | No
Compliance and OS repository | Remove | Yes | N/a | N/a | No
Compliance and OS repository | Set as default | Yes | N/a | N/a | No
Compliance and OS repository | Import latest | Yes | N/a | N/a | No
Compliance and OS repository | View bundles | Yes | Yes | N/a | Yes
Users | View | Yes | N/a | N/a | Yes
Users | Create | Yes | N/a | N/a | No
Users | Edit | Yes | N/a | N/a | No
Users | Disable/enable | Yes | N/a | N/a | No
Users | Delete | Yes | N/a | N/a | No
Users | Import | Yes | N/a | N/a | No
Directory Services | View | Yes | No | No | Yes
Directory Services | Create | Yes | No | No | No
Directory Services | Edit | Yes | No | No | No
Directory Services | Delete | Yes | No | No | No
Virtual Appliance Management | View | Yes | N/a | N/a | Yes
Virtual Appliance Management | Reboot virtual appliance | Yes | N/a | N/a | No
Virtual Appliance Management | Update virtual appliance | Yes | N/a | N/a | No
Virtual Appliance Management | Generate troubleshooting bundle | Yes | N/a | N/a | No
Virtual Appliance Management | Edit time zone and NTP settings | Yes | N/a | N/a | No
Virtual Appliance Management | Edit proxy settings | Yes | N/a | N/a | No
Virtual Appliance Management | SSL certificates | Yes | N/a | N/a | No
Virtual Appliance Management | Generate certificate request | Yes | N/a | N/a | No
Virtual Appliance Management | Upload certificate | Yes | N/a | N/a | No
Virtual Appliance Management | Edit license | Yes | N/a | N/a | No
Creating a user
Perform this task to create a PowerFlex Manager user and assign a role to that user.
Steps
1. On the menu bar, click Settings and click Users.
2. On the Users page, click Create.
3. Enter a unique User Name to identify the user account.
4. Enter a Password that a user enters to access PowerFlex Manager. Confirm the password.
The password must be between 8 and 32 characters and must include at least one number, one uppercase letter, and one lowercase letter (a quick check for this rule is sketched after this procedure).
5. Enter the First Name and Last Name of the user.
6. From the Role drop-down list, select one of the following roles:
● Administrator
● Standard
● Read only
7. Enter the Email address and Phone number for contacting the user.
8. Select Enable User to create the account with an Enabled status, or clear this option to create the account with a
Disabled status.
9. Click Save.
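The password rule in step 4 can be checked before you submit the form. A hypothetical PowerShell sketch of the same policy, 8 to 32 characters with at least one number, one uppercase letter, and one lowercase letter:
$pw = "Example1pass"   # candidate password to test
# Returns True only when every part of the stated policy is satisfied
($pw.Length -ge 8 -and $pw.Length -le 32 -and
 $pw -cmatch "\d" -and $pw -cmatch "[A-Z]" -and $pw -cmatch "[a-z]")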
Deleting a user
Perform this procedure to remove an existing PowerFlex Manager user.
Steps
1. On the menu bar, click Settings and click Users.
2. On the Users page, select one or more user accounts to delete.
3. Click Delete.
Click Yes in the warning message to delete the user.
Editing a user
Perform this task to edit a PowerFlex Manager user profile.
Steps
1. On the menu bar, click Settings and click Users.
2. On the Users page, select the user account that you want to edit.
3. Click Edit. For security purposes, confirm your password before editing the user.
4. You can modify the following user account information from this window:
● First Name
● Last Name
● Role
● Email
● Phone
● Enable User
If you select the Enable User check box, the user can log in to PowerFlex Manager. If you clear the check box, the user cannot log in.
5. Click Save.
Enabling or disabling users
The Enable option allows you to change the user account state to Enabled and the Disable option allows you to change the
user account state to Disabled.
Steps
1. On the menu bar, click Settings and click Users.
2. On the Users page, select one or more user accounts to enable/disable.
3. In the menu, click Enable or Disable to update the state accordingly.
For a user account that is already enabled, the Enable option is deactivated; for an account that is already disabled, the Disable option is deactivated.
Directory services
You can create a directory service that PowerFlex Manager can access to import users.
An Active Directory user is authenticated against the specific Active Directory domain to which a user belongs. While logging in
to PowerFlex Manager, the Active Directory user must first enter the directory name to which that user belongs, followed by
the username, for example: domain\username.
The Directory Services page displays the following information about PowerFlex Manager active directories:
● Host IP address
● Name
● Directory type
From this page, you can:
● Create a directory service
● Edit a directory service
● Delete a directory service
Importing Active Directory users
Before importing Active Directory users to PowerFlex Manager, you must create at least one directory service using PowerFlex
Manager. PowerFlex Manager roles are not automatically mapped to Active Directory user roles. You must assign an appropriate
role to each imported user. After the import, these users can log in to PowerFlex Manager.
About this task
To log in, the user enters <Directory Service Name>/<user name>, followed by the password.
If an imported user is deleted from Active Directory, that user is not automatically deleted from PowerFlex Manager. The
deleted user cannot log in to PowerFlex Manager, and you must remove the user manually from the user list. Importing an
already imported user does not have any effect. The user role also remains the same.
Steps
1. On the menu bar, click Settings and click Users.
2. On the Users tab, click Import Active Directory Users.
The Import Active Directory Users page is displayed.
3. Select a specific directory source from the Directory Source drop-down list to import the users from the selected directory
source.
4. Select any of the following options from the View menu to filter the search results:
● All—Displays both users and groups
● Users—Displays only users
● Groups—Displays only groups
5. Under the Available Users/Groups section, enter a user or group name in the Find a User/Group field to search for one
or more users or groups in the selected directory.
To perform an explicit search for a particular user or group, type the exact name of the user or group.
To search for a collection of users or groups, you can use a wildcard (*) character in your search. Wildcards are not
automatically added to search strings. Therefore, you must enter them explicitly.
NOTE:
If possible, do not use a wildcard (*) at the start of a search string, as this search pattern can slow down the
search significantly. For example, avoid a search string such as '*smith'. This type of search can be very slow, and,
sometimes, may cause the search to timeout and return an error. Embedded or trailing wildcards are much faster. For
example, you might improve search performance by entering 'john*' or 'user*abc'.
You can also speed up searches by specifying the lowest point in the AD tree as the base DN for the search. A preview sketch for wildcard patterns follows this procedure.
6. Select the users or groups you want to import and click the forward arrow (>>) button.
The selected users or groups are added to the Users/Groups to be imported section.
7. To assign a role to all the users or groups, select the users or groups and select any of the following roles from the User
Role drop-down list: Read Only, Administrator, Operator, or Standard.
● To apply specific roles, select the role from the Role menu beside the user or group name.
● To view the imported group, select All Groups from the Filter by Group menu.
In a single import operation, if you import a user both individually and as part of a group, the role that is assigned to the user individually takes precedence over the role that is assigned to the group.
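Because leading wildcards slow the search, it can help to preview what a trailing-wildcard pattern matches before running the import. A hypothetical sketch, assuming a management host with the RSAT ActiveDirectory PowerShell module; this runs outside PowerFlex Manager:
Import-Module ActiveDirectory
# Preview the accounts matched by the trailing-wildcard pattern suggested in step 5
Get-ADUser -Filter 'Name -like "john*"' | Select-Object SamAccountName, Name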
Minimum VMware vCenter permissions required to support
PowerFlex Manager functionalities in different modes
Use this procedure to create or enable user accounts for three PowerFlex Manager vCenter management modes.
Steps
1. From the VMware vSphere home screen, click Administration and from the Single Sign-On section, click Users and
Groups.
2. Click Add User to create a user account and enter the username and password.
There are three users:
● Monitor
● Lifecycle
● Managed
3. From the root view of the VMware vSphere client, click Administration and under Access Control, click Roles.
a. For the Monitor user, select the following permissions: Profile-driven storage - Profile-driven storage view and
Virtual machine - Read customization specifications (under provisioning).
b. For the Lifecycle user, select the following permissions: Profile-driven storage - Profile-driven storage view,
Virtual machine - Read customization specifications (under provisioning), and Host - Connection, firmware,
maintenance, power, query patch, system management, system resources.
c. Assign a name to the new role.
4. Click Hosts and Clusters and right-click on the VMware vCenter object.
a. Click Add Permission and select the user account.
b. Click the name of the role you created and select Propagate to children.
c. Click OK.
5. From PowerFlex Manager, go to Settings and create a new credential type called vCenter. Ensure the username and
password match the VMware vSphere credentials previously created.
6. From PowerFlex Manager, create a user credential for the vCenter server that matches the account previously created in
VMware vCenter. Add the VMware vCenter server object to the inventory using those credentials from PowerFlex Manager.
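Steps 2 through 4 can also be performed with VMware PowerCLI. The following sketch covers the Monitor user only; the role name, principal, and privilege IDs (StorageProfile.View for Profile-driven storage view, and VirtualMachine.Provisioning.ReadCustSpecs for reading customization specifications) are assumptions to verify against your own vCenter with Get-VIPrivilege:
Connect-VIServer -Server vcenter.example.local
# Assumed privilege IDs; confirm with: Get-VIPrivilege | Select-Object Id, Name
$privs = Get-VIPrivilege -Id "StorageProfile.View", "VirtualMachine.Provisioning.ReadCustSpecs"
$role = New-VIRole -Name "PFxM-Monitor" -Privilege $privs
# Assign the role at the root folder and propagate to children, as in step 4
New-VIPermission -Entity (Get-Folder -NoRecursion) -Principal "VSPHERE.LOCAL\pfxm-monitor" -Role $role -Propagate:$true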