POWERSCALE HARDWARE CONCEPTS-SSP
PARTICIPANT GUIDE
Internal Use - Confidential
© Copyright 2020 Dell Inc.
Table of Contents
Rebranding - Isilon is now PowerScale ................................................................... 2
Rebranding - Isilon is now PowerScale ................................................................................ 3
PowerScale Solutions - Internal ............................................................................... 4
PowerScale Solutions - Internal ........................................................................................... 5
Course Objectives...................................................................................................... 6
Course Objectives................................................................................................................ 7
Installation Engagement............................................................................................ 8
Installation Engagement....................................................................................................... 9
Module Objectives ............................................................................................................. 10
Customer Engagement Responsibility ............................................................................... 11
Physical Tools Requirements ............................................................................................. 12
Installation and Implementation Phases ............................................................................. 14
SolVe Desktop Installation ................................................................................................. 15
Safety Precautions And Considerations ............................................................................. 18
Onsite Do's and Don'ts....................................................................................................... 21
Configuration Guide and PEQ ................................................................................ 22
Configuration Guide and PEQ ............................................................................................ 23
Module Objectives ............................................................................................................. 24
Job Roles ........................................................................................................................... 25
Configuration Guide ........................................................................................................... 27
Configuration Guide Tour ................................................................................................... 28
Pre Engagement Questionnaire ......................................................................................... 33
PEQ Tour ........................................................................................................................... 34
Introduction to PowerScale Nodes ......................................................................... 39
Introduction to PowerScale Nodes ..................................................................................... 40
Module Objectives ............................................................................................................. 41
PowerScale Hardware Overview........................................................................................ 42
PowerScale Nodes Overview ............................................................................................. 43
PowerScale Node Types.................................................................................................... 44
Gen 6 Hardware Components............................................................................................ 48
Gen 6.5 Hardware Components......................................................................................... 50
Generation 6 Advantages and Terminologies .................................................................... 52
PowerScale Node Tour - Generation 5 .............................................................................. 53
PowerScale Node Tour - Generation 6 .............................................................................. 55
Internal and External Networking ........................................................................... 60
Internal and External Networking ....................................................................................... 61
Module Objectives ............................................................................................................. 62
PowerScale Networking Architecture ................................................................................. 63
Legacy Connectivity ........................................................................................................... 65
F200 and F600 Network Connectivity ................................................................................ 66
PowerScale Architecture - External Network ...................................................................... 67
Breakout Cables ................................................................................................................ 68
Cabling Considerations ...................................................................................................... 69
Cluster Management Tools ..................................................................................... 70
Cluster Management Tools ................................................................................................ 71
Module Objectives ............................................................................................................. 72
OneFS Management Tools ................................................................................................ 73
Serial Console Video ......................................................................................................... 74
Configuration Manager ...................................................................................................... 75
isi config ..................................................................................................................... 77
Web Administration Interface (WebUI) ............................................................................... 78
Command Line Interface (CLI) ........................................................................................... 80
CLI Usage .......................................................................................................................... 82
Front Panel Display............................................................................................................ 83
Course Summary ..................................................................................................... 84
Course Summary ............................................................................................................... 85
Appendix ................................................................................................. 87
Glossary .................................................................................................. 99
Rebranding - Isilon is now PowerScale
Important: In mid-2020, Isilon launched a new hardware platform, the
F200 and F600, branded as PowerScale. Over time, the Isilon brand
will convert to the new PowerScale branding. In the meantime, you will
continue to see Isilon and PowerScale used interchangeably, including
within this course and any lab activities. OneFS CLI isi commands,
command syntax, and man pages may have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other
technical documentation, community pages, and blog posts will
continue to use the "Isilon" brand.
The rebranding initiative is an iterative process, and rebranding all
instances of "Isilon" to "PowerScale" may take some time.
PowerScale Hardware Concepts-SSP
Internal Use - Confidential
© Copyright
2020 Dell Inc.
Page 3
PowerScale Solutions - Internal
The graphic shows the PowerScale Solutions Expert certification track. You can
leverage the Dell Technologies Proven Professional program to realize your full
potential: a combination of technology-focused and role-based training and exams
that cover concepts and principles as well as the full range of Dell Technologies
hardware, software, and solutions. You can accelerate your career and your
organization’s capabilities.
The track includes PowerScale Concepts (ODC), PowerScale Hardware Concepts
(ODC), PowerScale Hardware Installation (ODC), PowerScale Hardware
Maintenance (ODC), PowerScale Implementation (ODC), PowerScale
Administration (ODC, VC, C), PowerScale Advanced Administration (VC, C),
PowerScale Advanced Disaster Recovery (VC, C), PowerScale Solutions Design
(ODC), and Information Storage and Management, leading through the
Implementation Engineer, Technology Architect, and Platform Engineer roles to a
knowledge and experience based exam.
(C) - Classroom, (VC) - Virtual Classroom, (ODC) - On Demand Course
For more information, visit: http://dell.com/certification
Course Objectives
After completion of this course, you will be able to:
→ Discuss installation engagement actions.
→ Analyze the PowerScale Configuration Guide.
→ Describe PowerScale nodes.
→ Identify internal and external networking components.
→ Explain the cluster management tools.
Installation Engagement
Module Objectives
After completing this lesson, you will be able to:
• Describe the Customer Engineer and Implementation Specialist roles and
responsibilities.
• Explain the customer engagement procedures.
Customer Engagement Responsibility
There are five steps or phases for acquiring a PowerScale cluster. Each phase has
a separate team that engages with the customer. The install and implementation
phase of a PowerScale cluster begins after the product purchase, shipment, and
delivery to the customer site. This is after the design phase, where a Solution
Architect (SA) works with the customer to determine their specific needs and
documents what the solution looks like. The result of the SA engagement is the
PowerScale Configuration Guide that the Customer Engineer (CE) and
Implementation Specialist (IS) use to install and configure the cluster. Before the
install phase, all design decisions have been made.
Responsibilities during this phase include:
• Hardware installation
• Hardware upgrade service
• Work with sales, customer services, project managers, technical support, and
the customer, to ensure a smooth service delivery
• Create the PowerScale cluster
• Verify hardware installation is successful
Note: The Pre Engagement Questionnaire (PEQ) now replaces the
PowerScale Configuration Guide.
Physical Tools Requirements
Shown in the graphic are suggested tools for a typical installation.
1: The cables required are a single CAT5/CAT6 network patch cord, to directly
connect your laptop to the node, and a USB-to-serial adapter, preferably one that
uses the Prolific 2303 chipset.
2: DB9-to-DB9 null modem cable (female/female).
3: The software required or recommended is:
• Latest recommended OneFS release
• Latest cluster firmware
• Latest drive firmware package
• SolVe Desktop
• WinSCP - copies files to and from the cluster
• PuTTY - serial console or SSH access to the cluster (see the example after
this list)
4: Basic hand tools: screwdrivers (flat-head and Phillips), wire cutters, and an
anti-static wrist strap.
5: Cable ties/Velcro strips for cable management and routing.
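As a practical illustration, the commands below show one way to open a serial
console to a node from a Linux or macOS laptop. This is a minimal sketch: it
assumes the USB-to-serial adapter enumerates as /dev/ttyUSB0 (the device name
varies by adapter) and uses the commonly used 115200 baud, 8-N-1 settings;
verify both against the SolVe installation procedures. On Windows, PuTTY
provides the same function through its Serial connection type.

    # List the serial devices created by the USB-to-serial adapter.
    ls /dev/ttyUSB*

    # Open the console: 115200 baud, 8 data bits, no parity, 1 stop bit.
    screen /dev/ttyUSB0 115200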
Resources: Links to download the WinSCP and PuTTY software.
Other software can be downloaded at support.emc.com.
Installation and Implementation Phases
There are three distinct steps in the install and implementation phase: Install, Build,
and Implement.
1: During the install, the components are unpacked and racked, and switches are
rack mounted. Nodes are connected to the back-end switches, power is added,
and front-end network cables are connected between the cluster and customer
network. The Customer Engineer or CE performs these tasks.
2: Depending on the role, the CE may also perform the cluster build. The cluster
build is achieved when the system is powered on, the PowerScale Configuration
Wizard has been launched, and the information added.
3: In some regions, running the Configuration Wizard may be the sole responsibility
of the IS. After the cluster is built, the IS configures the features of OneFS as
written in the Configuration Guide.
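Once the Configuration Wizard completes and the cluster is built, a quick check
from any node's shell can confirm that all nodes have joined and the cluster is
healthy. This is a minimal sketch; consult the OneFS CLI reference for the full
option set and output details.

    # Summarized cluster health, node list, and capacity.
    isi status

    # Quiet mode omits the event summary for a faster check.
    isi status -q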
SolVe Desktop Installation
Before you arrive at a client site, remember to read the call notes and follow the
processes that are detailed in them. Check if there are any special instructions from
PowerScale Technical Support that you must follow, and bring the latest SolVe
Desktop procedures on your laptop.
SolVe Desktop has been revised and updated to SolVe Online. It is a knowledge
management-led standard procedure for Dell EMC field personnel, service
partners, and customers.
1: Download the SolVe Desktop application on the system. Go to the Tools & Sites
section, choose SolVe, and select SolVe Desktop Executable. Depending on the
browser used, you may be presented with security dialog boxes. Take the
needed actions to launch the executable.
2: Click through the Setup wizard and then select Install. Clicking Finish launches
the SolVe Desktop. SolVe must be authorized for use. Select OK. Note a few
general items.¹
3: From the menu, select Authorize and download the list of available products.
Adhere to the instructions shown: leave SolVe open, enter credentials (via SSO),
and open the keychain file. Select OK, then go to downloads and open the
keychain file.
4: Next are the Release Notes. Review and then close this window. Return to
SolVe. Notice the dialog² in the lower left indicating the keychain is loaded, which
means you are authorized and content is updated. Now scroll down and click
PowerScale to gather the PowerScale content.
5: Click OK. Again, note the progress in the lower left. Once the download is
complete, you will see that the PowerScale image has changed. Tools that are
downloaded appear in the upper left corner of the screen without the green arrow
present.
6: Now you can click PowerScale and view the available procedures. If updates are
available for download, you will see an information icon; click the icon and approve
the updated content download.
1: Notice the dialog in the lower left showing the version. This area also shows the
progress when upgrading and downloading content. Also notice the service topics
in the lower right. Once connected, many of the articles that are shown may not be
relevant to PowerScale. There is a filtering option in the menu to receive the
articles that pertain to a specific product.
2: The icons with a green arrow indicate that the user must click the icon in order to
download the tool.
Resource: Partners³ can search through the Dell EMC partner portal. SolVe
Desktop can be downloaded from the EMC support portal. SolVe Online can be
accessed through the SolVe Online portal. Click here for an overview of SolVe
Desktop and SolVe Online.
3: The view depends upon the Partner Type. A service partner sees what an
employee sees, a direct sales partner sees what a customer sees, and an
ASP/ASN partner sees products depending upon credentials.
Safety Precautions and Considerations
When working with PowerScale equipment, it is critical to adhere to the
following precautions. Failure to do so may result in electric shock, bodily injury,
fire, damage to PowerScale systems equipment, or loss of data. Electrostatic
discharge is a major cause of damage to electronic components and is potentially
dangerous to the installer. Review the safety precautions and considerations⁴
before the installation.
4: Failure to heed these warnings may also void the product warranty. Only trained
and qualified personnel should install or replace equipment. Select the button
options for specific information. Always refer to the current Site Preparation and
Planning Guide for proper procedures and environmental information.
1: The AC supply circuit for PowerScale nodes must be capable of supplying the
total current specified on the label of the node. All AC power supply connections
must be properly grounded, and connections that are not directly connected to the
branch circuit, such as nodes that are connected to a power strip, must also be
properly grounded. Do not overload the branch circuit of the AC supply that
provides power to the rack holding PowerScale nodes. The total rack load should
not exceed 80% of the branch circuit rating; for example, a rack fed by a 30 A
branch circuit should draw no more than 24 A. For high availability, the left and
right sides of any rack must receive power from separate branch feed circuits. To
help protect the system from sudden increases or decreases in electrical power,
use a surge suppressor, line conditioner, or uninterruptible power supply (UPS).
2: Beyond the precautions for working with electricity, it is also critical to ensure
proper cooling. Proper airflow must be provided to all PowerScale equipment. The
ambient temperature of the environment in which PowerScale Gen 5 nodes
operate should not exceed the maximum rated ambient temperature of 35°C
(95°F). Gen 6 nodes have an ASHRAE (American Society of Heating,
Refrigerating and Air-Conditioning Engineers) designation of A3, which enables the
nodes to operate in environments with ambient temperatures from 5 up to
40 degrees Celsius for limited periods of time.
3: If you install PowerScale nodes in a rack that is not bolted to the floor, use both
front and side stabilizers. Installing PowerScale nodes in an unbolted rack without
these stabilizers could cause the rack to tip over, potentially resulting in bodily
injury. Use only approved replacement parts and equipment.
4: Racks can be installed in raised or non-raised floor data centers capable of
supporting that system. It is your responsibility to ensure that the data center floor
can support the weight of the system. Although the cluster may weigh less, floor
support rated at a minimum of 2,600 lbs (1,180 kg) will accommodate a fully
populated Gen 5 rack. A fully populated rack with A2000 chassis weighs about
3,500 lbs (1,590 kg). If the floor is rated at less than 3,500 lbs, then additional care
and planning are needed. Some data center floors have different static load
versus dynamic (rolling) load specifications, as well as sectional weight and load
point limits. This becomes important while moving pre-racked solutions around the
data center.
5: To avoid personal injury or damage to the hardware, always use two people to
lift or move a node or chassis. A Gen 6 chassis can weigh in excess of 200 lbs. It is
recommended to use a lift similar to the one shown to install the components into
the rack. If a lift is not available, you must remove all drive sleds and compute
modules from the chassis before lifting. Even when lifting an empty chassis, never
attempt to lift and install with fewer than two people.
6: Electrostatic Discharge
Onsite Do's and Don'ts
When onsite, remember to represent Dell EMC and yourself in the best possible
light. Do not change the Configuration Guide or PEQ without the approval of the
design team. Any approved changes should be meticulously tracked, and any
appropriate change control processes should be followed. Remember to bring your
documentation and copies to provide to the customer.
Before you leave a client site, ensure you:
• Test the device function and connectivity by following documented test
procedures in the training material and support guides.
• Escalate any client satisfaction issues or severity level 1 situations to the next
level of support.
• Follow up on any outstanding commitments that are made to the client.
• Contact PowerScale support to report the call status.
• Ensure that the product is registered and that the Install Base Record is
updated.
Tip: To make an Install Base entry or change, browse the EMC website.
Configuration Guide and PEQ
Module Objectives
After completing this lesson, you will be able to:
• Identify the job roles of people involved in the implementation.
• Explain the use of the Configuration Guide and PEQ in implementation.
Job Roles
There are four job roles that are associated with the PowerScale hardware
installation and implementation process.
1: Customer Engineer (CE):
• Performs hardware installation and hardware upgrade services.
• Creates the PowerScale cluster.
• Verifies that hardware installation is successful.
2: Implementation Specialist (IS):
• Has knowledge of the storage system.
• Implements the cluster.
3: Project Manager (PM):
• First contact for customers for service engagement.
• Builds the delivery schedule.
• Coordinates services delivery with customer and service personnel.
• Monitors progress of service delivery.
4: Solutions Architect (SA):
• Develops implementation plan.
• Designs configuration.
Configuration Guide
The PowerScale Configuration Guide is an Excel spreadsheet that contains all the
information that is gathered and discussed during the design phase. It contains
every detail necessary to build the PowerScale cluster: IP address ranges, subnets,
pools, access zones, SmartConnect zones, and all other purchased features within
OneFS. The document helps the CE create the cluster and the IS implement the
cluster to the specifications of the design team.
The Configuration Guide:
• Is a post rack-and-stack tool.
• Is used for install and implementation.
• Contains client, network, and workflow information.
• Should not be modified without escalation to the SA.
Configuration Guide Tour
The config guide details the configurable options that the customer needs to get
their cluster up and running. For demonstration purposes, the config guide is
populated using a PowerScale cluster acquisition from Diverse Genomics⁵.
Typically, the Solutions Architect populates the config guide with the customer.
The config guide is an Excel spreadsheet consisting of eight tabs: Customer
Information, Design Intent, Topology, Core, Basic, Extended, DNS Records,
and Appendix.
1: This sheet contains client-specific contact and site data, populated here with
fictional information. This sheet provides keys to the licensed features being
implemented. Also note that the config guide provides additional information for
some of the cells.
5: Diverse Genomics is a fictional company and, as seen here, the contact names
are fictional also.
2: The Design Intent tab contains information about the customer workflow and
access. Here we also see the file size breakdown. Diverse Genomics accesses the
cluster via CIFS (SMB) and NFS.
3: Moving to the Topology tab, we see a simple diagram of the PowerScale
solution. This one is for demonstration, but a real-life graphic may include the
client's network routers, switches, patch panels, UPSs, and so on. This information
can be documented and used as a reference or for future troubleshooting.
4: The Core sheet has the information that is needed when creating a cluster using
the Configuration Wizard and configuring the cluster after the cluster is created. It
contains the back-end and front-end IP addresses, DNS server, passwords, cluster
name, encoding, MTU, internal IP ranges, external ranges, and gateway
configuration for the Configuration Wizard fields. Core also provides the customer
network information that the IS needs to implement the cluster. The PowerScale
Installation course provides a hands-on exercise using this information to build a
cluster.
5: The Basic and Extended tabs deal with the specific licensable or configurable
features for node pools and tiers within OneFS: SmartPools settings, tiers, any file
extensions that the cluster filters, SmartQuotas, snapshots, and so on. The
Extended sheet has settings for event auditing, RBAC, SmartDedupe, SmartLock,
and antivirus.
6: The DNS Records information is required when configuring SmartConnect.
7: The Appendix sheet provides some useful links and may include notes for other
relevant information that is not covered in the other sheets, such as data center
floor location, emergency contacts, customer operational hours, and so on.
Pre Engagement Questionnaire
The PowerScale PEQ is the replacement for the Configuration Guide. The stated
purpose of the PEQ is to document the Professional Services project installation
parameters and to facilitate communication between the responsible resources.
The PEQ is designed to incorporate the process workflow and ease the hand-off
from Pre-Sales to Delivery. It is a delivery document that benefits other roles and
helps define roles and responsibilities, and it is not the same as the Qualifier.
PEQ Tour
The PEQ is an Excel spreadsheet consisting of eight tabs: Cover, Engagement
Details (SE), Solution Diagram (SE), Checklist (PM), Project Details (PM),
Hardware, Cluster, and Reference.
1: To start the application, open the PEQ spreadsheet tool. The first tab that is
displayed is the Cover tab. The Cover tab contains the creation date and the
customer name.
2: Begin filling out the document from upper left to bottom right. The SE shares the
customer contact information and describes at a high level what the project team is
expected to do at each site, using the provided drop-down menus. The SE also
provides general customer environment information, such as operating systems in
use, backup applications and protocols, and any specialty licenses sold. Accurate
and complete customer information is important to a smooth and efficient planning
process.
3: On the Solution Diagram tab, the SE provides the solution diagrams or
topologies that are used during the presales cycle.
4: The Project Manager begins with the Engagement Checklist tab, which helps
them plan project tasks with a great deal of granularity.
5: It is also the responsibility of the Project Manager to maintain the data center
readiness information on the Project Details tab. Here the PM focuses on
verifying that each site has met the power, cooling, networking, and other
prerequisites before scheduling resources. The PM should also complete the
Administrative Details section with team member information, project ID details,
and an optional timeline.
6: The Hardware tab shows the physical connection parameters and some basic
logical parameters necessary to “stand up” the cluster. When multiple node types
are selected and defined on the Engagement Details tab, the Cluster Details
section includes a complete listing of the extended Node Details and Front-End
Switch details.
7: The Cluster tab represents a single cluster and its logical configuration. Each
section on the Cluster tab has a designated number (yellow chevron). The
numbers represent the listed priority of that section and should be completed in
order, starting with number one. This tab is split into sections that describe different
features. These sections are enabled through the questions in the Licensing \
Features section.
8: The Reference tab provides frequently used content, cross-references,
checklists, and other items that assist the delivery resources throughout the
delivery engagement. It is intended as a quick reference, not as the authoritative
source of that information.
Introduction to PowerScale Nodes
Module Objectives
After completing this lesson, you will be able to:
• Describe node naming conventions.
• Identify each PowerScale node series.
• Identify PowerScale node components.
PowerScale Hardware Overview
Nodes combine to create a cluster. Each cluster behaves as a single, central
storage system. PowerScale is designed for large volumes of unstructured data.
PowerScale has multiple servers that are called nodes. PowerScale includes
all-flash, hybrid, and archive storage systems.
The graphic shows a dual-chassis, 8-node Generation 6 (or Gen 6) cluster.
Gen 5 highlights.
Gen 6 highlights.⁶
Gen 6.5 highlights.⁷
6: The Gen 6 platform reduces data center rack footprints with support for four
nodes in a single 4U chassis. It enables enterprises to take on new and more
demanding unstructured data applications. The Gen 6 can store, manage, and
protect massively large datasets with ease. With the Gen 6, enterprises can gain
new levels of efficiency and achieve faster business outcomes.
7: The ideal use cases for Gen 6.5 (F200 and F600) are remote office/back office,
factory floors, IoT, and retail. Gen 6.5 also targets smaller companies in the core
verticals, and partner solutions, including OEM. The key advantages are low entry
price points and the flexibility to add nodes individually, as opposed to the
chassis/two-node minimum for Gen 6.
PowerScale Nodes Overview
The graphic shows a Generation 6 (or Gen 6) chassis and Generation 6.5 nodes.
The design goal for the PowerScale nodes is to keep the simple ideology of NAS,
provide the agility of the cloud, and deliver the cost of commodity.
Storage nodes are peers.
The Gen 6x family has different offerings that are based on the need for
performance and capacity. As Gen 6 is a modular architecture, you can scale out
compute and capacity separately. All the nodes are powered by OneFS.
PowerScale Node Types
The graphic shows the target workflows for each PowerScale node.
1: The Gen 5 portfolio includes five storage nodes and two accelerator nodes. A
storage node includes the following components in a 2U or 4U rack-mountable
chassis with an LCD front panel: CPUs, RAM, NVRAM, network interfaces,
InfiniBand adapters, disk controllers, and storage media.
Gen 5 consists of the following five storage nodes:
• A-Series⁸
• S-Series⁹
• X-Series¹⁰
• NL/HD-Series¹¹
S-Series and X-Series nodes can be equipped with SSD media. The SSDs can
be used to hold file system metadata, which provides improved performance for
metadata-intensive operations, while improving overall latency. They can also be
configured as L3 cache to provide faster access to frequently accessed data
stored on the cluster. Click the information buttons for more information.
8: The A-Series consists of two separate nodes with two different functions. The
A100 performance accelerator adds CPU and memory resources without adding
storage capacity. The A100 Backup Accelerator allows you to perform backups
directly to a backup server or tape array over a Fibre Channel connection, without
sending data over the front-end network.
9: The S-Series targets IOPS-intensive, random access, file-based applications.
The S-Series node excels in environments where access to random data needs to
be fast.
10: The X-Series achieves a balance between large capacity and high-performance
storage. These nodes are also best for high-concurrency applications, where many
people have to access a file at the same time.
11: The NL (for Nearline) and HD (for High-Density) nodes are primarily used for
large data storage. The NL-Series nodes are used for active archival, and the HD
nodes for deep archival workloads. NLs and HDs are appropriate when the data
stored does not change often and is only infrequently accessed.
2: The Gen 6 platform provides the following offerings. Previous generations of
PowerScale nodes come in 1U, 2U, and 4U form factors. Gen 6 has a modular
architecture, with four nodes fitting into a single 4U chassis.
• F-Series
• H-Series
• A-Series
3: Gen 6.5 requires a minimum of three nodes to form a cluster. You can add
single nodes to the cluster. The F600 and F200 are a 1U form factor and based on
the R640 architecture.
• F600¹²
• F200¹³
12: Mid-level all-flash array: a 1U PE server with 10 (8 usable) x 2.5” drive bays,
enterprise NVMe SSDs (RI, 1 DWPD), and data reduction standard. Front-end
networking options are 10/25 GbE or 40/100 GbE, with a 100 GbE back end. Also
called Cobalt nodes.
13: Entry-level all-flash array: a 1U PE server with 4 x 3.5” drive bays (with 2.5”
drive trays), enterprise SAS SSDs (RI, 1 DWPD), and data reduction standard.
10/25 GbE front-end and back-end networking. Also called Sonic nodes.
4: There is no 1-to-1 mapping between Gen 5 nodes and Gen 6 nodes, meaning
that, for example, an S210 node may not perform identically to an F800 node in the
same workflows. With Gen 6, there are new tiers of performance. The following
information can help identify the positioning of each node series relative to each
other. Note that this table is only a guideline.
Gen 6 Hardware Components
Gen 6 requires a minimum of four nodes to form a cluster. You must add nodes to
the cluster in pairs.
The chassis holds four compute nodes and 20 drive sled slots.
Both compute modules in a node pair power on immediately when one of the
nodes is connected to a power source.
The graphic shows a Gen 6 chassis.
1: The compute module bays of two nodes make up one node pair. Scaling out a
cluster with Gen 6 nodes is done by adding more node pairs.
2: Each Gen 6 node provides two ports for front-end connectivity. The connectivity
options for clients and applications are 10 GbE, 25 GbE, and 40 GbE.
3: Each node can have one or two SSDs that are used as L3 cache, global
namespace acceleration (GNA), or other SSD strategies.
4: Each Gen 6 node provides two ports for back-end connectivity. A Gen 6 node
supports 10 GbE, 40 GbE, and InfiniBand.
5: Power supply unit - peer node redundancy: when a compute module power
supply failure takes place, the power supply from the peer compute module in the
node pair temporarily provides power to both nodes.
6: Each node has five drive sleds. Depending on the length of the chassis and the
type of drive, each node can handle up to 30 drives or as few as 15.
7: Disks in a sled are all the same type.
8: The sled can be either a short sled or a long sled. The types are:
• Long sled - four 3.5" drives
• Short sled - three 3.5" drives
• Short sled - three or six 2.5" drives
9: The chassis comes in two different depths; the normal depth is about 37 inches
and the deep chassis is about 40 inches.
10: Large journals offer flexibility in determining when data should be moved to the
disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors its
journal to its peer node. The node writes the journal contents to the vault when a
power loss occurs. A backup battery helps maintain power while data is stored in
the vault.
Gen 6.5 Hardware Components
Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.
The graphic shows an F200 or F600 node pool.
1: Scaling out an F200 or an F600 node pool only requires adding one node.
2: For front-end connectivity, the F600 uses PCIe slot 3.
3: Each F200 and F600 node provides two ports for back-end connectivity, using
PCIe slot 1.
4: Redundant power supply units - when a power supply fails, the secondary
power supply in the node provides power. Power is supplied to the system equally
from both PSUs when the Hot Spare feature is disabled. Hot Spare is configured
using the iDRAC settings.
5: Disks in a node are all the same type. Each F200 node has four SAS SSDs.
6: The nodes come in two different 1U models, the F200 and F600. You need three
like nodes to form a cluster.
7: The F200 front-end connectivity uses the rack network daughter card (rNDC).
8: Each F600 node has eight NVMe SSDs.
Important: The F600 nodes have a 4-port 1 GbE NIC in the rNDC slot.
OneFS does not support this NIC on the F600.
Generation 6 Advantages and Terminologies
Generation 6 4U Node
Advantages:
Gen 6 provides flexibility. From a customer perspective, it allows for easier
planning: each chassis requires 4U in the rack, with the same cabling and a higher
storage density in a smaller data center footprint. Note that this also means there is
four times as much cabling across a Gen 6 4U chassis populated with four nodes.
Customers can select the ideal storage-to-compute ratio for their workflow.
Generation 6 Terminologies
PowerScale Node Tour - Generation 5
Gen 5 Front
The front panel protects the node drives. A solid blue light indicates that the node is
healthy, joined to the cluster, and that there are no critical alerts. You can perform
various administrative tasks using the control panel. The X410, like most other 4U
nodes, contains 24 drives in the front.
Gen 5 Inside
Movie:
The web version of this content contains a movie.
Audio Script for Video
Now, let’s take a look at the inside of the Gen 5 node.
The air baffle and cross bracket cover the CPU(s) and memory. At the back of the
node are the PCIE cards. The SAS controller card facilitates moving data to and
from the drives in the node. The boot drive controller card includes two boot drives
that contain the OneFS operating system. The NVRAM card contains the file
system journal. The 10 Gigabit Ethernet card and InfiniBand card provide ports to
connect the front and back end networks respectively.
The X410 contains two 8-core Intel Xeon processors. The dual inline memory
modules (or DIMMs) are located in banks of four on either side of each CPU. On
the left side of the node is the intrusion switch. If for some reason the top panel of
the node is not present while the node is powered on, the switch sends a signal to
OneFS and the node is put in a read-only state to help protect the integrity of the
cluster file system.
Towards the front are three fans to cool the node. There are also two batteries that
provide power for the NVRAM card so that it can transfer the contents of the file
system journal to flash memory in case of an unexpected power loss.
Gen 5 Back Side
Movie:
The web version of this content contains a movie.
Audio Script for Video
On the back of the X410 node, there are 12 drives, protected by an EMI shield.
There are also two power supplies and a power button next to them. Press the
power button to power a node up after it’s been properly shut down. Do not press
the power button to turn off the node unless instructed to do so by Isilon Support.
LEDs show the power input status on top, the output status in the middle, and fault
indicator on the bottom. When the node is operating normally, the power input and
output LEDs will be green. The power output LED will not be illuminated if the node
is powered down. If there is an issue with the power supply, the alert indicator LED
will be orange.
Also on the back of the node are several connection ports: a serial port for direct
console access to the node, four USB ports for use only as directed by EMC Isilon
Technical Support, two Gigabit Ethernet ports and two 10 Gigabit Ethernet ports for
client connections (also known as the front-end network), two InfiniBand ports for
node-to-node connections (also known as the back-end network), and in some
cases a manufacturer debug port.
PowerScale Node Tour - Generation 6
Gen 6 Chassis
All Gen 6 chassis come with the front panel and the front panel display module.
The front panel covers the drive sleds while allowing access to the display.
Movie:
The web version of this content contains a movie.
Audio Script for Video
This demonstration takes a tour of the Gen 6 front panel display, drive sleds, and
an outside look at the node’s compute modules. We’ll focus on identifying
components and indicator function.
Front Panel Display
We’ll start the tour on the front panel display. This allows various administrative
tasks and provides alerts. There are 5 navigation buttons that let the administrator
select each node to administer. There are 4 node status indicators. If a node’s
status light indicator is yellow, it indicates a fault with the corresponding node. The
product badges indicate the types of nodes installed in the chassis. Only two
badges are necessary because nodes can only be installed in matched adjacent
node pairs. The front panel display is hinged to allow access to the drive sleds it
covers, and contains LEDs to help the administrator see the status of each node.
Sleds
Now, take the front bezel off the chassis and you will see the drive sleds for the
nodes. The Gen 6 chassis has 20 total drive sled slots that can be individually
serviced, but only one sled per node can be safely removed at a time. The graphic
shows that each node is paired with 5 drive sleds.
The status lights on the face of the sled indicate whether the sled is currently in
service, and whether the sled contains a failing drive. The service request button
informs the node that the sled needs to be removed, allowing the node to prepare it
for removal by moving key boot information away from drives in that sled. This
temporarily suspends the drives in the sled from the cluster file system, and then
spins them down. This is done to maximize survivability in the event of further
failures, and protect the cluster file system from the effect of having several drives
temporarily go missing. The do-not-remove light blinks while the sled is being
prepared for removal, and then turns off when it is ready. We’ll see this here.
The sleds come in different types. First, when configured for nodes that support
3.5" drives, there are 3 drives per sled, as shown here, equaling 15 drives per
node, making 60 drives per chassis. The second type is a longer sled that holds
four 3.5” drives. This is used in the deep archive, deep rack chassis for A2000
nodes. The long sleds have 20 drives per node, for up to 80 3.5" drives per
chassis.
In the 3.5" drive sleds, the yellow LED drive fault lights are on the paddle cards
attached to the drives, and they are also visible through the cover of the drive sled
as indicated here. The long sled has 4 LED viewing locations.
The third type of sled applies to nodes supporting 2.5" drives. The 2.5” drive sleds
can have 3 or 6 drives per sled (as shown), 15 or 30 drives per node, making 60 or
120 drives per fully populated chassis.
Internally to the 2.5" sled, there are individual fault lights for each drive. The yellow
LED associated with each drive is visible through holes in the top cover of the sled
so that you can see which drive needs replacement. The LED will stay on for about
10 minutes while the sled is out of the chassis.
Compute
When we look at the back, we see the four nodes’ compute modules in the chassis’
compute bays. We also see the terra cotta colored release lever on each compute
module, secured by a thumb screw. As shown, compute module bays 1 and 2
make up one node pair and bays 3 and 4 make up the other node pair. In the event
of a compute module power supply failure, the power supply from the peer
compute module in the node pair will temporarily provide power to both nodes.
Let’s move the upper right of a compute module. The top light is a blue, power LED
and below that is an amber, fault LED. Each compute module has a ‘DO NOT
REMOVE’ indicator light which is shaped like a raised hand with a line through it.
To service the compute module in question, shut down the affected node and wait
until the ‘DO NOT REMOVE’ light goes out. Then it is safe to remove and service
the unit in question.
The uHDMI port is used for factory debugging. The PCIE card on the right is for
external network connectivity and the left PCIE card is for internal network
connectivity. The compute module has a 1GbE management port, and the DB9
serial console port.
Each compute module has either a 1100 W dual-voltage (low and medium
compute) or a 1450 W high-line (240 V) only (high and ultra compute) power
supply unit. If high-line only nodes are being installed in a low-line (120 V) only
environment, two 1U rack-mountable step-up transformers are required for each
Gen 6 chassis. Always keep in mind that Gen 6 nodes do not have power buttons;
both compute modules in a node pair will power on immediately when one is
connected to a live power source. There are also status indicator lights, such as
the PSU fault light.
All nodes have an ASHRAE (American Society of Heating, Refrigerating and
Air-Conditioning Engineers) designation of A3, which enables the nodes to operate
in environments with ambient temperatures from 5 up to 40 degrees Celsius for
limited periods of time.
In closing, there are also two SSD bays on each compute module, one or both of
which are populated with SSDs (depending on node configuration) that are used as
L3 cache.
This concludes the tour of the PowerScale Gen 6 front panel display, drive sleds,
and an outside look at the node’s compute modules.
Inside Gen 6 Node
This hardware tour will take a deeper look inside the node’s compute module.
Movie:
The web version of this content contains a movie.
Audio Script for Video
This demonstration takes a tour of the inside of the Gen 6 compute module.
First, let’s look at the back of the chassis. The chassis can have two or four
compute modules. Remember that a node is ¼ of the chassis and consists of a
compute module and five drive sleds. Each node pairs with a peer node to form a
node pair. Shown here, nodes three and four form a node pair.
Let’s start by removing the node’s compute module to get a look inside. This
demonstration does not use a powered system. This tour does not highlight the
steps for removing components. Remember to always follow the proper removal
and install procedures from the SolVe Desktop.
WARNING: Only qualified Dell EMC personnel are allowed to open compute
nodes.
Let’s remove the node’s lid. This can be a bit tricky on the first time. Pull the blue
release handle without pressing down on the lid. Pressing down on the lid while
trying to open will keep the node lid from popping up. The lid portion of the compute
module holds the motherboard, CPU and RAM.
There are two different motherboard designs to accommodate different CPU types;
the performance based Broadwell-EP or the cost optimized Broadwell-DE. Shown
here is the Broadwell-DE based board that the H400, A200, and A2000 use. Note
the position of the four DIMMs and their slot numbering. Here is the Broadwell-EP
based board that the F800, H600 and H500 use. Note the position of the four
DIMMs and their slot numbering. The DIMMs are field replaceable units. The CPU
is not. Due to the density and positioning of motherboard components around the
DIMM slots, damage to the motherboard is possible if care is not taken while
removing and installing DIMM modules.
Let’s turn to the lower portion of the compute module. First we see the fan module.
This is a replaceable unit. Shown is the release lever for the fans.
The riser card, on the right side of the compute module, contains the PCIE card
slots, the NVRAM vault battery, and the M.2 card containing the NVRAM vault.
Let’s remove this to get a closer look. Removing the riser card can be tricky the first
time. Note the two blue tabs for removing the HBA riser, a sliding tab at the back
and a fixed tab at the front. At the same time, push the sliding tab in the direction of
the arrow on the tab and free the front end by pulling the riser away from the
locking pin on the side of the chassis with the fixed tab. Lift the tabs to unseat the
riser and pull it straight up. Try this at least once before going onsite to replace a
component.
Here are the two PCIe slots and the ‘Pelican’ slot. They are x4 or x8 depending on
the performance level of the node. The internal NIC for communication between
nodes is the PCI card shown on the left, the external PCI card is on the right. The
external NIC is used for client and application access. Depending on the
performance level of the node, the external NIC may either be a full-size PCIe card
facing left, or a ‘Pelican’ card connected to the smaller proprietary slot between the
two PCIe slots, and facing right.
Next is the battery. The backup battery maintains power to the compute node while
journal data is being stored in the M.2 vault during an unexpected power loss
event. Note that because the riser card and the battery are paired, if the battery
needs to be replaced, it is replaced together with the riser card. Lastly, as seen
here, the M.2 vault disk is located under the battery. The M.2 vault disk is also a
field replaceable unit.
This concludes the inside tour. Remember to review the documentation on the
SolVe Desktop for proper removal and replacement of the node’s compute module
components.
Internal and External Networking
Module Objectives
After completing this lesson, you will be able to:
• Explain the significance of internal and external networks in clusters.
• Describe InfiniBand switches and cables, and identify Ethernet cabling.
PowerScale Networking Architecture
OneFS supports the standard network communication protocols IPv4 and IPv6.
PowerScale nodes include several external Ethernet connection options, providing
flexibility for a wide variety of network configurations.¹⁴
Network: There are two types of networks that are associated with a cluster:
internal and external.
Front-end, External Network
The graphic shows an F200 cluster and its supported front-end protocols: the
client/application layer connects to the PowerScale storage layer over Ethernet
using NFS, SMB, S3, HTTP, FTP, HDFS, and SWIFT, while back-end
communication (PowerScale internal) runs on its own Ethernet layer.
Clients connect to the cluster using Ethernet connections¹⁵ that are available on all
nodes.
14: In general, keeping the network configuration simple provides the best results
with the lowest amount of administrative overhead. OneFS offers network
provisioning rules to automate the configuration of additional nodes as clusters
grow.
15: Because each node provides its own Ethernet ports, the amount of network
bandwidth available to the cluster scales linearly.
The following view shows how the complete cluster combines hardware, software,
and networks:
Back-end, Internal Network
OneFS supports a single cluster16 on the internal network. This back-end network,
which is configured with redundant switches for high availability, acts as the
backplane for the cluster.17
16: All intra-node communication in a cluster is performed across a dedicated
backend network, comprising either 10 GbE or 40 GbE Ethernet, or low-latency
QDR InfiniBand (IB).
17: This enables each node to act as a contributor in the cluster and isolates
node-to-node communication on a private, high-speed, low-latency network. This
backend network uses Internet Protocol (IP) for node-to-node communication.
Legacy Connectivity
Three types of InfiniBand cable are used with currently deployed clusters. Gen 5
nodes and newer InfiniBand switches have QDR InfiniBand ports, which take
QSFP connectors. Older nodes and switches, which run at DDR or SDR speeds,
use the legacy CX4 connector. In mixed environments (QDR nodes and a DDR
switch, or conversely), a hybrid IB cable is used; this cable has a CX4 connector on
one end and a QSFP connector on the other. However, QDR nodes are
incompatible with SDR switches. On each cable, the connector types identify the
cable. As the graphic shows, the combination of the node type and the InfiniBand
switch port type determines the correct cable type.
F200 and F600 Network Connectivity
The graphic shows a closer look at the external and internal connectivity. Slot 1 is
used for backend communication on both the F200 and F600. Slot 3 is used for the
F600 2x 25 GbE or 2x 100 GbE frontend network connections. The rack network
daughter card (rNDC) is used for the F200 2x 25 GbE frontend network
connections.
The F200 and F600 have no dedicated management port.
• PCIe slot 1: used for all backend (BE) communication
• PCIe slot 3: used for F600 frontend (FE)
• rNDC: used for F200 frontend (FE)
Note: The graphic shows the R640 and does not represent the F200 and F600 PCIe and rNDC
configuration.
Tip: Interfaces are named "25gige-N" or "100gige-N." Interface
names may not indicate the link speed. For example, the interface
name for a NIC running at a lower speed, such as 10 Gb, does not
change to "10gige-1." You can use ifconfig to check the link speed.
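As a quick sketch (the interface name follows the naming convention above; the
output line is illustrative, not captured from a cluster), the link speed can be
checked from the CLI:

    # Show the negotiated link speed for an interface; the "media" line
    # reports the actual speed regardless of the interface name.
    ifconfig 25gige-1 | grep media
    # Illustrative output:
    #   media: Ethernet autoselect (10Gbase-SR <full-duplex>)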
PowerScale Architecture - External Network
An 8-node Gen 6 cluster showing supported protocols.
The external network provides connectivity for clients over standard file-based
protocols. It supports link aggregation, and network scalability is provided through
software in OneFS. A Gen 6 node has two front-end ports (10 GigE, 25 GigE, or
40 GigE) and one 1 GigE port for management. Gen 6.5 nodes have two front-end
ports (10 GigE, 25 GigE, or 100 GigE). In the event of a Network Interface
Controller (NIC) or connection failure, clients do not lose their connection to the
cluster. For stateful protocols, such as SMB and NFSv4, this prevents client-side
timeouts and unintended reconnection to another node in the cluster. Instead,
clients maintain their connection to the logical interface and continue operating
normally. Continuous Availability (CA) is supported for stateful protocols such as
SMB and NFSv4.
Breakout Cables
The 40 GbE and 100 GbE connections are four individual lanes of 10 GbE and
25 GbE, respectively. Most switches support breaking out a QSFP port into four
SFP ports using a 1:4 breakout cable. On the backend, the breakout is configured
automatically when the switch detects the cable type as a breakout cable. The
frontend is often configured manually on a per-port basis.
Backend breakout cables
Cabling Considerations
Listed here are some general cabling considerations.
• On Gen 5 platforms, verify that each node has two separate power cables that
are plugged into two separate power sources.
• On a Gen 6 chassis, ensure that each member of a node pair is connected to a
different power source18.
• Before creating the cluster, do a quick cable inspection.
• The front-end, client-facing network connections should be evenly distributed
across patch panels in the server room. Distributing the connections may avoid
single points of failure.
• Use care when handling and looping copper InfiniBand cables, and any type of
optical network cable. Bending or mishandling cables can result in damaged and
unusable cables.
• Do not coil cables to less than 10 inches in diameter, to prevent damage. Never
bend cables beyond their recommended bend radius.
18: The use of Y cables is not recommended, because the node's power supply is
no longer redundant if all power is supplied through the same cable. Verify that all
cables are firmly seated and that the wire bails are firmly in place to keep the
power cables seated.
Cluster Management Tools
Module Objectives
After completing this lesson, you will be able to:
• Identify tools used to manage PowerScale.
OneFS Management Tools
The OneFS management interface is used to perform various administrative and
management tasks on the PowerScale cluster and nodes. Management capabilities
vary based on which interface is used. The different types of management
interfaces in OneFS are:
• Serial Console
• Web Administration Interface (WebUI)
• Command Line Interface (CLI)
• Platform Application Programming Interface (PAPI)
• Front Panel Display
Serial Console Video
Movie:
The web version of this content contains a movie.
Link:
https://edutube.emc.com/Player.aspx?vno=Xu/3IyDNSxbuNMOcLHrqBg==&autoplay=true
Four options are available for managing the cluster: the web administration
interface (WebUI), the command-line interface (CLI), the serial console, or the
platform application programming interface (PAPI), also called the OneFS API. The
first management interface that you may use is a serial console to node 1. A serial
connection using a terminal emulator, such as PuTTY, is used to initially configure
the cluster. The serial console gives you serial access when you cannot or do not
want to use the network. Other reasons for using a serial connection may be
troubleshooting, site rules, a network outage, and so on. Shown are the terminal
emulator settings.
The Configuration Wizard automatically starts when a node is first powered on or
reformatted. If the Wizard starts, the menu and prompt are displayed as shown.
Choosing option 1 steps you through the process of creating a cluster; option 2
joins an existing cluster, and the Wizard exits after the node finishes joining. After
completing the Configuration Wizard, running the isi config command enables
you to change the configuration settings.
Configuration Manager
To initially configure a PowerScale cluster, the CLI must be accessed by
establishing a serial connection to the node designated as node 1. The serial
console gives you serial access when you cannot or do not want to use the
network. Other reasons for using a serial connection may be troubleshooting, site
rules, a network outage, and so on.
Serial Port19
Configure the terminal emulator utility to use the following settings:
• Transfer rate = 115,200 bps
• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware
19: The serial port is usually a male DB9 connector. This port is called the service
port. Connect a serial null modem cable between a serial port of a local computer,
such as a laptop, and the service port on the node designated as node 1. As most
laptops today no longer have serial ports, you might need to use a USB-to-serial
converter. On the local computer, launch a serial terminal emulator, such as
PuTTY.
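As an illustrative sketch only (the COM port and device path are assumptions that
vary by laptop and operating system), the same settings can be applied from a
command line:

    # Windows, using PuTTY command-line options:
    # 115,200 bps, 8 data bits, no parity, 1 stop bit, RTS/CTS (hardware) flow control
    putty -serial COM3 -sercfg 115200,8,n,1,R

    # Linux or macOS, using screen with a USB-to-serial adapter:
    screen /dev/ttyUSB0 115200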
More Information on Command Prompt20
20: Either a command prompt or a Configuration Wizard prompt appears. The
command prompt displays the cluster name, a dash (-), a node number, and either
a hash (#) symbol or a percent (%) sign. If you log in as the root user, it is a #
symbol. If you log in as another user, it is a % symbol. For example, Cluster-1# or
Cluster-1%. This prompt is the typical prompt found on most UNIX and Linux
systems. When a node first powers on or reformats, the Configuration Wizard
automatically starts. If the Configuration Wizard starts, the prompt displayed is as
shown. There are four options: Create a new cluster, Join an existing cluster, Exit
wizard and configure manually, and Reboot into SmartLock Compliance mode.
Choosing option 1 creates a new cluster, while option 2 joins the node to an
existing cluster. If you choose option 1, the Configuration Wizard steps you through
the process of creating a new cluster. If you choose option 2, the Configuration
Wizard ends after the node finishes joining the cluster. You can then configure the
cluster using the WebUI or the CLI.
isi config
• Edits Wizard settings
• Common commands: shutdown, status, name
• Changes the prompt to >>>
• Other "isi" commands are not available in the configuration console
The isi config command, pronounced "izzy config," opens the configuration
console. The console contains the settings that were configured when the Wizard
ran.
Use the console to change initial configuration settings. When in the isi config
console, other configuration commands are unavailable. The exit command is
used to go back to the default CLI.
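A minimal sketch of a console session follows (the cluster name and output are
illustrative assumptions, not captured output; shutdown is omitted deliberately):

    Cluster-1# isi config
    # Banner text varies by OneFS version.
    >>> status        # show the current configuration settings
    >>> name          # display (or change) the cluster name
    >>> exit          # return to the default CLI
    Cluster-1#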
Web Administration Interface (WebUI)
• OneFS version
• User must have logon privileges
• Connect to any node in the cluster over HTTPS on port 8080
• Multiple browser support
The WebUI is a graphical interface that is used to manage the cluster.
The WebUI requires at least one IP address configured on one of the external
Ethernet ports present in one of the nodes.
Example browser URLs:
• https://192.168.3.11:8080
• https://engineering.dees.lab:8080
To access the web administration interface from another computer, an Internet
browser is used to connect to port 8080. The user must log in using the root
account, the admin account, or an account with logon privileges. After opening the
web administration interface, there is a four-hour login timeout. In OneFS 8.2.0 and
later, the WebUI uses the HTML5 doctype, meaning it is HTML5-compliant in the
strictest sense but does not use any HTML5-specific features. Previous versions of
OneFS require Flash.
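As a quick sketch (the address is the example IP above; output is not shown), you
can confirm that a node answers on port 8080 before opening a browser:

    # -k skips certificate verification, since the cluster presents a
    # self-signed certificate by default; expect an HTTP status line back.
    curl -k -I https://192.168.3.11:8080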
Command Line Interface (CLI)
The CLI can be accessed in two ways:
• Out-of-band22
• In-band23
In-band access uses any SSH client, such as OpenSSH or PuTTY; out-of-band
access uses a serial connection. Access to the interface changes based on the
assigned privileges.
OneFS commands are code that is built on top of the UNIX environment and are
specific to OneFS management. You can use commands together in compound
command structures, combining UNIX commands with customer-facing and
internal commands.
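For example, a minimal sketch of compound commands (illustrative only; isi output
formatting varies by OneFS version):

    # Trim long status output with a standard UNIX filter.
    isi status | head -20
    # Filter status output for lines mentioning health.
    isi status | grep -i health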
22: Accessed using a serial cable that is connected to the serial port on the back of
each node. As many laptops no longer have a serial port, a USB-to-serial adapter
may be needed.
23: Accessed using an external IP address that is configured for the cluster.
1: The default shell is zsh.
2: OneFS is built upon FreeBSD, enabling the use of UNIX-based commands, such
as cat, ls, and chmod. Every node runs OneFS, including the many FreeBSD
kernel and system utilities.
3: Connections make use of Ethernet addresses.
4: OneFS supports management through isi commands. Not all administrative
functionality is available using the CLI.
5: CLI commands can be customized with options, also known as switches and
flags. A single command with multiple options results in many different
permutations, and each combination results in different actions performed.
6: The CLI is a scriptable interface. The UNIX shell enables scripting and execution
of many UNIX and OneFS commands.
Caution: Follow guidelines and procedures to implement scripts
appropriately so that they do not interfere with regular cluster
operations. Improper use of a command, or using the wrong
command, can be potentially dangerous to the cluster, the node, or
customer data.
CLI Usage
• Can use common UNIX tools
• "help" shows needed privileges
• Shows syntax and usage
• Option explanations
The man isi or isi --help command is an important command for a new
administrator. These commands provide an explanation of the available isi
commands and command options. You can also view a basic description of any
command and its available options by typing the -h option after the command.
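A short sketch of these help paths, using the commands named above (output
elided):

    man isi           # manual page for the isi command family
    isi --help        # list the available isi commands and options
    isi status -h     # basic description and options for a single command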
Front Panel Display
Front Panel Display of a Gen 6 chassis.
The Gen 6 front panel display is an LCD screen with five buttons that are used for
basic administration tasks24.
The Gen 6.5 front panel has limited functionality25 compared to the Gen 6.
24: Some of these tasks include adding the node to a cluster, and checking node
or drive status, events, cluster details, capacity, and IP and MAC addresses.
25: You can join a node to a cluster, and the panel displays the node name after
the node has joined the cluster.
Course Summary
Now that you have completed this course, you can:
→ Discuss installation engagement actions.
→ Analyze the PowerScale Configuration Guide.
→ Describe PowerScale nodes.
→ Identify internal and external networking components.
→ Explain the cluster management tools.
Appendix
Links to download the WinSCP and PuTTY software
Below are hyperlinks to useful information.
• Dell EMC PROVEN PROFESSIONAL COMMUNITY (URL)
• Dell EMC PROVEN PROFESSIONAL CERTIFICATION (URL)
• STUDENT DISCUSSIONS COMMUNITY (URL)
• Dell EMC EDUCATION SERVICES (URL)
• CONTACT US (URL)
• PuTTY client
• WinSCP
Electrostatic Discharge
Electrostatic discharge (ESD) is a major cause of damage to electronic
components and is potentially dangerous to the installer. To avoid ESD damage,
review ESD procedures before arriving at the customer site and adhere to the
precautions when onsite.
Clean Work Area: Clear the work area of items that naturally build up electrostatic
charge.
Antistatic Packaging: Leave components in their antistatic packaging until it is time
to install them.
No ESD Kit Available:
• Before touching a component, put one hand firmly on a bare metal surface.
• After removing a component from its antistatic bag, do NOT move around the
room or touch furnishings, personnel, or surfaces.
• If you must move around or touch something, first put the component back in
the antistatic bag.
ESD Kit: Always use an ESD kit when handling components.
Don't Move: Minimize movement to avoid the buildup of electrostatic charge.
Gen 5 Highlights
Gen 5 and Gen 6 nodes can exist within the same cluster. Having both types of
node in one cluster is the typical path for a hardware refresh as customers
incorporate and scale with Gen 6 nodes. Currently, a cluster can have up to 144
nodes, regardless of node type and mix. You can add Gen 5 nodes to the cluster
one at a time, provided the cluster has a minimum of three nodes26 of the same
series.
OneFS unites the entire cluster in a globally coherent pool of memory, CPU, and
capacity. OneFS automatically distributes file data across the nodes for built-in high
availability. When a file request is received by an available node, that node
requests the pieces of the file from the other nodes over the back-end network,
assembles the file, and delivers it to the requesting client. Requests are therefore
not processed through one controller node, but rather by the node that is most
accessible based on availability.
26: For example, you cannot use the capacity of a single X410 node if adding it to a
cluster consisting of only S210 nodes; you would need to add three X410s in this
example. For Gen 6, nodes are added to the cluster in node pairs, as shown. A
node pair is the minimum incremental node growth.
PowerScale Nodes
Individual PowerScale nodes provide the data storage capacity and processing
power of the PowerScale scale-out NAS platform. All of the nodes are peers to
each other, so there is no single 'master' node and no single 'administrative' node.
• No single master
• No single point of administration
Administration can be done from any node in the cluster, as each node provides
network connectivity, storage, memory, non-volatile RAM (NVDIMM), and the
processing power found in its Central Processing Units (CPUs). Nodes are also
available in different compute and capacity configurations, and these varied
configurations can be mixed and matched to meet specific business needs.
Each node contains:
• Disks
• Processor
• Cache
• Front-end network connectivity
Tip: Gen 5 and Gen 6 nodes can exist within the same cluster. Every
PowerScale node is equal to every other PowerScale node of the
same type in a cluster. No one specific node is a controller or filer.
F-Series
The F-series nodes sit at the top of both performance and capacity, with all-flash
arrays for ultra compute and high capacity. The all-flash platforms can accomplish
250,000 to 300,000 protocol operations per chassis, and 15 GB/s aggregate read
throughput from the chassis. Even as the cluster scales, the latency remains
predictable.
• F80027
• F81028
27: The F800 is suitable for workflows that require extreme performance and
efficiency. It is an all-flash array with ultra-high performance. The F800 sits at the
top of both the performance and capacity platform offerings when implementing the
15.4 TB model, giving it the distinction of being both the fastest and densest Gen 6
node.
28: The F810 is suitable for workflows that require extreme performance and
efficiency. The F810 also provides high-speed inline data deduplication and inline
data compression. It delivers up to 3:1 efficiency, depending on your specific
dataset and workload.
H-Series
After the F-series nodes, the next in terms of computing power are the H-series
nodes. These are hybrid storage platforms that are highly flexible and strike a
balance between large capacity and high-performance storage, providing support
for a broad range of enterprise file workloads.
• H40029
• H50030
• H560031
• H60032
29: The H400 provides a balance of performance, capacity, and value to support a
wide range of file workloads. It delivers up to 3 GB/s bandwidth per chassis and
provides capacity options ranging from 120 TB to 720 TB per chassis. The H400
uses a medium compute performance node with SATA drives.
30: The H500 is a versatile hybrid platform that delivers up to 5 GB/s bandwidth
per chassis with a capacity ranging from 120 TB to 720 TB per chassis. It is an
ideal choice for organizations looking to consolidate and support a broad range of
file workloads on a single platform. The H500 is comparable to a top-of-the-line
X410, combining a high compute performance node with SATA drives. The whole
Gen 6 architecture is inherently modular and flexible with respect to its
specifications.
31: The H5600 combines massive scalability (960 TB per chassis) and up to
8 GB/s bandwidth in an efficient, highly dense, deep 4U chassis. The H5600
delivers inline data compression and deduplication. It is designed to support a wide
range of demanding, large-scale file applications and workloads.
32: The H600 is designed to provide high performance at value, delivering up to
120,000 IOPS and up to 12 GB/s bandwidth per chassis. It is ideal for high
performance computing (HPC) workloads that don't require the extreme
performance of all-flash. These are spinning-media nodes with various levels of
available computing power; the H600 combines turbo compute performance nodes
with 2.5" SAS drives for high IOPS workloads.
A-Series
The A-series nodes have less compute power than the other node types and are
designed for data archival purposes. The archive platforms can be combined with
new or existing all-flash and hybrid storage systems into a single cluster that
provides an efficient tiered storage solution.
• A20033
• A200034
33: The A200 is an ideal active archive storage solution that combines
near-primary accessibility, value, and ease of use.
34: The A2000 is an ideal solution for high-density, deep archive storage that
safeguards data efficiently for long-term retention. The A2000 can contain eighty
10 TB drives, for 800 TB of storage, by using a deeper chassis with longer drive
sleds containing more drives in each sled.
Glossary
Front Panel Display
The Front Panel Display is located on the physical node or chassis. It is used to
perform basic administrative tasks onsite.
OneFS CLI
The command-line interface runs "isi" commands to configure, monitor, and
manage the cluster. Access to the command-line interface is through a secure shell
(SSH) connection to any node in the cluster.
PAPI
The PAPI is divided into two functional areas: one area enables cluster
configuration, management, and monitoring functionality, and the other area
enables operations on files and directories on the cluster.
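As a hedged illustration (the endpoint path and account are assumptions; consult
the OneFS API reference for exact resource URIs), a PAPI call is an authenticated
HTTPS request to port 8080:

    # Query cluster configuration through the OneFS API (PAPI);
    # -k skips verification of the cluster's self-signed certificate.
    curl -k -u admin https://192.168.3.11:8080/platform/1/cluster/config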
Serial Console
The serial console is used for initial cluster configurations by establishing serial
access to the node designated as node 1.
SolVe Desktop
The SolVe Desktop is a comprehensive tool that has replaced the individual
product Procedure Generator. Based on cumulative knowledge and current best
practices, the output provides clear guidance and process steps. This helps to
enable Dell EMC employees, partners and customers to implement, update and
repair products and solutions in a proactive and consistent manner.
WebUI
The browser-based OneFS web administration interface provides secure access
with OneFS-supported browsers. This interface is used to view robust graphical
monitoring displays and to perform cluster-management tasks.