MURAL Installation Guide for Starter Pack

Version 3.3
Last Updated May 9, 2014
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE
WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO
BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE
FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE
INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS
REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR
CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the
University of California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating system.
All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS
ARE PROVIDED “AS IS” WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES,
EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE
PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR
INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA
ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN
ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other
countries. A listing of Cisco's trademarks can be found at www.cisco.com/go/trademarks. Third party
trademarks mentioned are the property of their respective owners. The use of the word partner does not imply
a partnership relationship between Cisco and any other company. (1005R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual
addresses and phone numbers. Any examples, command display output, network topology diagrams, and other
figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or
phone numbers in illustrative content is unintentional and coincidental.
Copyright © 2014, Cisco Systems, Inc. All rights reserved.
Table of Contents

Installation Overview
    Prerequisites
    System Components
    Hardware
    Installation Package
    Customer Information Questionnaire
Configuring UCS Hardware and Software for MURAL
    Verifying UCS Hardware Configuration
    Configuring Base UCS Software Settings
Configuring UCS for MURAL
    Configuring Direct Attachments to External SANs
    Configuring System Profiles for the Cisco UCS Manager
Installing and Configuring a VM for GMS Lite
    Before You Begin
    Installing the Vdisk Image and VM Software
    Creating and Configuring the VM for GMS Lite
Defining the Deployment Configuration File Using GMS Lite
    Initializing the GMS Lite User Interface
    Verifying System Settings
    Verifying MAC Addresses of Blades
    Verifying World Wide Identifiers for LUNs
    Verifying Node and Node Cluster Configuration
    Assigning Application Profiles to Nodes
    Validating the Configuration
Manufacturing the Master GCU Blade Using GMS Lite
    Prerequisites
    Manufacturing the Master GCU Blade
    Verifying Zoning and FLOGI on the Fabric Interconnects
Allocating Storage for the Master GCU Blade
    EMC Hardware Installation Prerequisites
    Configuring IP Addresses for the EMC System
    Registering the Master GCU Blade with EMC
    Creating RAID Groups and the LUN for the Master GMS Node
    Creating the Storage Group for the Master GCU Blade
    Adjusting Caching on the EMC
Configuring and Starting the Master GCU Node
Manufacturing the Remaining Blades
Allocating Storage for the Remaining Blades
    Registering the Remaining Blades with EMC
    Creating LUNs for the Remaining Nodes
    Creating Storage Groups for the Remaining Blades
    Adjusting Caching on the EMC
    Verifying WWIDs for All LUNs
Configuring the Standby GCU Node
    Configuring the Password and License, and Applying Patches
    Completing the Installation on the Standby GCU Node
Installing MURAL on the Remaining Blades
    Installing MURAL Software on the Remaining Blades
    Verifying the Installation is Complete
    Tuning the System for Performance
    Verifying that Processes are Running
Generating and Pushing the Information Bases
    Configuring IBs for EDR
    Configuring Gateways For All IBs
    Synchronize the EDR and BulkStats IBs on the Standby GCU Node
    Fetching BulkStats IBs
    Create a File for Uncategorized URL, UA, and TAC Reports
Generating and Installing Security Certificates
    Backing Up and Generating the Keystore Files
    Downloading the Signed Certificate
    Downloading the CA Certificate
    Installing the Signed Certificate in the Keystore
Initializing and Verifying Data Feeds from ASRs
    Creating a MURAL User Account for ASRs
    Initializing the Data Feed on ASRs
    Verifying Data Storage in HDFS
    Setting the Data Start Time
    Starting the MURAL Data Processing Jobs
    Verifying Data Processing on the Compute Blades
    Verifying Data on the Insta Blades
    Starting UI Processes and Verifying Data
Mandatory Parameters for Incoming ASR Files
    Mandatory Attributes for Flow EDRs for MURAL
    Mandatory HTTP EDR Attributes for MURAL
    ASR-Side Configuration
Manually Manufacturing the Master GCU Blade
Installation Overview
This document describes how to install the Mobility Unified Reporting and
Analytics (MURAL) application. MURAL provides Web-based reporting and
analytics for deep packet inspection (DPI) data emerging from your network.
Prerequisites
This document assumes that you have a working knowledge of the following
technologies:
- Linux operating system
- Cisco Unified Computing System (UCS) Software, Release 2.1
- Cisco UCS Server Blade administration
- EMC storage devices

Before you begin the installation, we recommend that you:

- Review the Release Notes for MURAL 3.3 (or later)
- Complete a training course on MURAL
- Locate the Customer Information Questionnaire (CIQ) for the deployment. See "Customer Information Questionnaire" on page 4.
System Components
The following figure shows the components of the MURAL platform, focusing on
how the data flows through the system:
The figure depicts each type of node as a separate logical entity. In larger
deployments there is often a separate blade for each type of node, whereas in
smaller deployments a blade might host multiple types of nodes. In the Starter
Pack deployment, for example, the GMS, Collector, and UI/Caching nodes run on
the same blade, referred to as the GCU blade.
The MURAL platform nodes perform the following functions:

- Collector node—Collects data streams pushed to the MURAL platform, interprets the exported flows, enriches them with static data, and assembles data sets. The Collector stores the raw data in the Hadoop Distributed File System (HDFS) and sends it to the Compute node. The Collector node cluster can have any number of servers, in pairs for master and standby, and uses 1+1 redundancy (transparent failover between pairs of active-active nodes).

- Compute node—Analyzes and aggregates the data, creating data cubes. The term data cube is a convenient shorthand for the data structure, which is actually multi-dimensional and not limited to the three dimensions of a cube. The Compute node cluster can have any number of servers, depending on the deployment, and uses N+1 redundancy.

- Insta node—Stores and manages the processed data cubes in the columnar Insta database. Data cubes are commonly retained for the previous three to six years, depending on the amount of storage. The Insta node cluster has two servers with 1+1 redundancy.

- Caching and UI nodes—Provide processed data to MURAL graphical user interfaces (GUIs) and third-party applications. In MURAL, the two nodes always run on the same blade, and so are referred to collectively as the UI/Caching (or Rubix) node. The UI/Caching node uses 1+1 redundancy in active-active mode.

  The Caching node manages the Rubix engine and Rubix data cache. The Rubix engine continually fetches newly processed data from the Insta nodes as soon as it is available, and caches it in the Rubix data cache for quicker delivery to the UI node.

  The UI node accepts requests from GUIs and third-party applications, and requests the relevant data cubes from the Caching node.

- General Management System (GMS) node—Provides centralized management of the other MURAL nodes, such as remote manufacturing of blades (installing the MURAL software), patch management, monitoring of all nodes and operations, and importing and running node configurations. (It does not appear in the preceding figure because it does not operate on the flow of network data and processed data cubes.)
Hardware
The MURAL application is hosted on one UCS 5108 Blade Server Chassis in a single-chassis deployment, and on two chassis in a dual-chassis deployment. Each chassis houses blades that host the nodes listed above (Collector, Compute, Insta, UI/Caching, and GMS). Data storage is provided by EMC storage devices.

The data flows that feed the MURAL system are generated by a Cisco ASR 5000 or ASR 5500 platform (referred to in the remainder of this document simply as an ASR).
Installation Package
The MURAL software installation package contains the following components:
- An ISO image file. For the image name and associated MD5 checksum, refer to the Release Notes.
- The mural.xml file, which is installed by convention on the master GMS blade and contains custom configuration settings for your deployment, based on the network topology information in the CIQ.
- Any software patches that apply to the release. A complete list appears in the Release Notes.
- Management information bases (MIBs)
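Before installing, you can confirm that your copy of the ISO image is intact by computing its MD5 checksum and comparing it against the value in the Release Notes. A minimal sketch from a Linux or Cygwin terminal (the image filename here is illustrative; on MacOS, use the md5 command instead):

# md5sum mural-release-image.iso

The printed hash must match the MD5 checksum listed in the Release Notes.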
Customer Information Questionnaire
The Customer Information Questionnaire (CIQ) is an Excel spreadsheet of
configuration settings based on a site survey that was completed before the
installation process. Each worksheet includes the indicated kind of information:

- Contacts—Site personnel and their responsibilities
- Space_Power Req—Space and power requirements
- IP Survey—Physical network connections, virtual LANs (VLANs), interfaces, slot assignments, Simple Network Management Protocol (SNMP) and Simple Mail Transfer Protocol (SMTP) settings, and so forth
- Network Diagrams—Physical connections between system components
- Connectivity—Details for ports and connections
- Firewall—Firewall changes required for connectivity
- Alarms—All supported SNMP traps
- ASR5K—Locations for required ASR information bases (IBs)
Proceed to "Configuring UCS Hardware and Software for MURAL" on page 7.
Configuring UCS Hardware and Software for MURAL
The Mobility Unified Reporting and Analytics (MURAL) software components
(referred to as nodes) run on Cisco Unified Computing System (UCS) B200 M2
and M3 Blade Servers (blades) installed in Cisco UCS 5108 Blade Server Chassis
(chassis). Cisco UCS 6248UP 48-Port Fabric Interconnects provide the physical
connections between the chassis, the storage area network (SAN) hardware, and
the client network (also referred to as the management network).
MURAL has been tested and validated in the topology shown in the following
figure, which includes two chassis and two Fabric Interconnects. This document
refers to the validated topology, but a single-chassis Starter Pack deployment
uses only one chassis, and potentially only one Fabric Interconnect.
Perform the tasks described in the following sections:
- "Verifying UCS Hardware Configuration" on the next page
- "Configuring Base UCS Software Settings" on page 15
Verifying UCS Hardware Configuration
Verify UCS hardware configuration, including slot assignments for blades,
connections between Fabric Interconnects, SAN uplinks, and network uplinks.
Verifying Slot Assignments for Blades
The IP Survey worksheet in the Customer Information Questionnaire (CIQ)
specifies for your deployment which nodes run on the blades installed in slots on
the chassis. The sample slot assignments in the following figures and tables are
for illustrative purposes only; refer to your CIQ for the actual assignments.
Notes:
- All blades are physically identical, except the GCU blades, which have more RAM than the others.
- For the dual-chassis Starter Pack deployment, the sample slot assignments provide for high availability by placing redundant nodes of each type on different chassis (for example, the Insta 1 node is on Chassis 1 and the Insta 2 node is on Chassis 2). Verify that your slot assignments follow this pattern.
- Slots are numbered 1 through 8 from left to right, top to bottom.
The following figure and table depict sample slot assignments for a single-chassis
Starter Pack deployment.
Slot   Node or Nodes
1      GMS 1, Collector 1, UI/Caching 1 (GCU 1)
2      Compute 1
3      Insta 1
4      Compute 2
5      GMS 2, Collector 2, UI/Caching 2 (GCU 2)
6      Compute 3
7      Compute 4
8      Insta 2
The following figure and table depict sample slot assignments for Chassis 1 in a
dual-chassis Starter Pack deployment.
Slot   Node or Nodes
1      GMS 1, Collector 1, UI/Caching 1 (GCU 1)
2      Compute 1
3      Compute 2
4      Insta 1
The following figure and table depict sample slot assignments for Chassis 2 in a
dual-chassis Starter Pack deployment.
Slot   Node or Nodes
1      GMS 2, Collector 2, UI/Caching 2 (GCU 2)
2      Compute 3
3      Compute 4
4      Insta 2
Verifying Physical Connections to the Fabric Interconnects
Verify the physical connections between the Fabric Interconnects and other
hardware components.
Verifying Connections to the Management Network
Verify that the physical connections between the Fabric Interconnects and the
management network match the following figure and table. In particular, verify
the following:
- The Management Ethernet port, mgmt0 (Port 5 in the following figure and table), is connected to an external hub, switch, or router.
- The L1 ports (Port 3 in the following figure and table) on both Fabric Interconnects are directly connected to each other.
- The L2 ports (Port 4 in the following figure and table) on both Fabric Interconnects are directly connected to each other.
Number   Description
1        Beaconing LED and Reset button
2        System status LED
3        UCS cross-connect port L1
4        UCS cross-connect port L2
5        Network management port
6        Console port
7        USB port
Verifying Connections to the Chassis
Verify that the physical connections between the Fabric Interconnects and the
Blade Server chassis match the following figure and tables.
The following table indicates how to connect the ports in the Cisco UCS Fabric
Extenders for Chassis 1 to the ports on Fabric Interconnects A and B. Consult the
bill of materials (BOM) to determine which model of Fabric Extender is specified
for your deployment.
Chassis 1 Fabric Extenders   Fabric Interconnects
Extender 1                   Interconnect A
    Port 1                       Port 1
    Port 2                       Port 2
    Port 3                       Port 3
    Port 4                       Port 4
Extender 2                   Interconnect B
    Port 1                       Port 5
    Port 2                       Port 6
    Port 3                       Port 7
    Port 4                       Port 8
The following table indicates how to connect the ports in the Cisco UCS Fabric
Extenders for Chassis 2 to the ports on Fabric Interconnects A and B.
Chassis 2 Fabric Extenders   Fabric Interconnects
Extender 1                   Interconnect B
    Port 1                       Port 1
    Port 2                       Port 2
    Port 3                       Port 3
    Port 4                       Port 4
Extender 2                   Interconnect A
    Port 1                       Port 5
    Port 2                       Port 6
    Port 3                       Port 7
    Port 4                       Port 8
Verifying Connections to the SAN
Verify that the physical connections between the Fabric Interconnects and the
SAN match the following figure and table:
Port                   Description
SP-A management port   Customer management switch
SP-B management port   Customer management switch
SP-A FC-A              Fabric A—Port 31
SP-A FC-B              Fabric B—Port 31
SP-B FC-A              Fabric A—Port 32
SP-B FC-B              Fabric B—Port 32
Verifying Connections to the UCS Networks
Verify that the physical connections between the Fabric Interconnects and the
UCS networks match the following figure and table:
Port               Description
Fabric A—Port 17   Customer production network
Fabric B—Port 17   Customer production network
Fabric A—Port 18   Customer secondary production switch (optional)
Fabric B—Port 18   Customer secondary production switch (optional)
Configuring Base UCS Software Settings
To set the admin password and management port IP address, set up a cluster for
the Fabric Interconnects, and specify the default gateway, perform the following
steps:
1. On the laptop or console server from which you are connecting to the Fabric
Interconnects, set the console port parameters to the following values:
- 9600 baud
- 8 data bits
- No parity
- 1 stop bit
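For example, on a Linux or Mac laptop you might attach to the console with the screen utility, which defaults to 8 data bits, no parity, and 1 stop bit (the device path /dev/ttyUSB0 is illustrative and depends on your serial adapter):

# screen /dev/ttyUSB0 9600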
2. Connect to the console port of Fabric Interconnect A.
3. At the prompts, type values that comply with the following instructions:
- Do not include the -A or -B suffix on the system name (UCS-name variable).
- For the cluster IPv4 address (Virtual-IP-address-for-FI-cluster variable), we recommend setting an address on the management subnet.
- Setting the DNS IPv4 address (DNS-IP-address variable) and default domain name is optional.
Enter the configuration method (console/gui) ? console
Enter the setup mode; setup newly or restore from backup.
(setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enforce strong password? (y/n) [y]: n
Enter the password for "admin": admin-password
Confirm the password for "admin": admin-password
Is this Fabric Interconnect part of a cluster (select 'no'
for standalone)? (y/n): y
Enter the switch fabric (A/B) []: A
Enter the system name: UCS-name
Physical Switch Mgmt0 IPv4 address : Fab-A-mgmt-port-IP-address
Physical Switch Mgmt0 IPv4 netmask : Fab-A-mgmt-port-IP-netmask
IPv4 default gateway: gateway-address-in-mgmt-subnet
Cluster IPv4 address: Virtual-IP-address-for-FI-cluster
Configure the DNS Server IPv4 address? (yes/no) [n]: y
DNS IPv4 address : DNS-IP-address
Configure the default domain name? (yes/no) [n]: y
Default domain name : cisco.com
Following configurations will be applied:
Switch Fabric=A
System Name=UCS-name
Enforced Strong Password=no
Physical Switch Mgmt0 IP Address=Fabric-A-mgmt-port-IP-address
Physical Switch Mgmt0 IP Netmask=Fabric-A-mgmt-port-IP-netmask
Default Gateway=gateway-address-in-mgmt-subnet
DNS Server=DNS-IP-address
Domain Name=cisco.com
Cluster Enabled=yes
Cluster IP Address=Virtual-IP-address-for-FI-cluster
NOTE: Cluster IP will be configured only after both Fabric
Interconnects are initialized
Apply and save the configuration (select 'no' if you want to re-enter)?
(yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok
4. Connect to the console port of Fabric Interconnect B, and verify the
redundancy cables to Fabric Interconnect A are connected. At the prompts,
set the following parameters.
Enter the configuration method (console/gui) ? console
Installer has detected the presence of a peer Fabric interconnect. This
Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric interconnect: admin-password
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IP Address: Fab-A-mgmt-port-IP-address
Peer Fabric interconnect Mgmt0 IP Netmask: Fab-A-mgmt-port-IP-netmask
Cluster IP address: Virtual-IP-address-for-FI-cluster
Physical Switch Mgmt0 IPv4 address : Fab-B-mgmt-port-IP-address
Apply and save the configuration (select 'no' if you want to re-enter)?
(yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok
You can now log in to the UCS management UI from a web browser at
http://Virtual-IP-address-for-FI-cluster.
Proceed to "Configuring UCS for MURAL" on page 19.
Configuring UCS for MURAL
To complete the configuration of the Cisco Unified Computing System (UCS) for
Mobility Unified Reporting and Analytics (MURAL), run the MURAL configuration
scripts for UCS, configure Direct Attached Storage (DAS) in UCS, and set system
profile settings for the UCS Manager.
The ucs-config.exp script simplifies UCS configuration by setting parameters for
the Fabric Interconnects, blades, LAN, and storage area network (SAN). Obtain
the following files from either Cisco Advanced Services or Technical Support:
- ucs-config.exp (the script)
- ucs-config-version-number.txt (where version-number is the most recent version available), which specifies the values of the parameters set by the script. You must update the file with information specific to your deployment.
To run the UCS configuration script, perform the following steps:
1. Edit the ucs-config-version-number.txt file, modifying each value that is
marked with a Customer Information Questionnaire (CIQ) label to match
the value in your CIQ.
2. Save the modified ucs-config-version-number.txt file in the same
directory, renaming it to ucs.txt.
3. On a Cygwin, Linux, or Mac terminal, run the ucs-config.exp script with the following command. For the ucs-admin-password variable, substitute the password you set in "Configuring Base UCS Software Settings" on page 15.

# ./ucs-config.exp ucs-management-ip-address ucs-admin-password
If the script encounters an error, you can recover by resetting the UCS configuration to factory defaults over an SSH connection to the UCS Manager. You need to do this on both the A and B sides.
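A minimal sketch of the reset, assuming an SSH session as admin to the management IP address of each Fabric Interconnect (the erase configuration command under local-mgmt asks for confirmation, then erases the configuration and reboots the Fabric Interconnect):

# ssh admin@fabric-interconnect-mgmt-ip-address
UCS-name# connect local-mgmt
UCS-name(local-mgmt)# erase configuration

After both Fabric Interconnects come back up, redo "Configuring Base UCS Software Settings" on page 15 and rerun the script.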
Configuring Direct Attachments to External SANs
This section describes how to configure Direct Attached Storage (DAS) in UCS,
which enables you to directly attach a fibre-channel SAN to the Fabric
Interconnects. Complete the tasks described in the following sections:
- "Setting Fabric Interconnects to FC Switch Mode" below
- "Creating VSANs for Zoning" on the facing page
- "Designating Storage Ports and Assigning Storage Cloud VSANs" on page 25
- "Confirming the Storage Port WWPN is Logged Into the Fabric Interconnects" on page 26
- "Creating Storage Connection Policies" on page 28
- "Creating SAN Connectivity Policy" on page 30
- "Creating vHBA Initiator Groups" on page 31
- "Configuring SAN Cloud Policy" on page 32
- "Verifying Service Profile Templates" on page 34
- "Configuring System Profiles for the Cisco UCS Manager" on page 36
You use the UCS Manager interface to perform most of the tasks. For more
detailed information about it, see the Cisco UCS Manager GUI Configuration
Guide, Release 2.1.
Setting Fabric Interconnects to FC Switch Mode
To set the Fabric Interconnects to FC Switch mode, perform the following steps:
1. Log in to the UCS Manager. Click the Equipment tab at the top of the left-hand navigation pane, then navigate to Fabric Interconnects > Fabric Interconnect identifier, where identifier is a letter like A in the following figure. Open the General tab in the main pane.
2. In the Actions box, select both Set Ethernet Switching Mode and
Set FC Switching Mode (in the following figure, the latter does not
appear because the list of choices extends beyond the bottom of the box).
3. Click the Save Changes button.
4. If the value in the FC Mode field in the Status box is not Switch, set it to
Switch and reboot the system.
5. Repeat steps 1 through 4 for the other Fabric Interconnects.
Creating VSANs for Zoning
Create one virtual SAN (VSAN) for each Fabric Interconnect.
By convention, the name of a VSAN includes the associated ID, in the format
vsanID. Note the following restrictions on IDs for VSANs, including storage
VSANs, which determine the names you can use:
- ID 4079 (VSAN name vsan4079) is reserved and cannot be used in either FC Switching mode or FC End-Host mode.
- Do not assign IDs from the range 3040 through 4078 (VSAN names vsan3040 through vsan4078), which are not operational when Fabric Interconnects operate in FC Switch mode. (You configured FC Switch mode in "Setting Fabric Interconnects to FC Switch Mode" on page 20.) The Cisco UCS Manager marks them with an error and raises a fault.
To create a regular VSAN (in the SAN Cloud) for a Fabric Interconnect, perform
the following steps:
1. In the UCS Manager, click the SAN tab at the top of the left-hand navigation
pane, then navigate to SAN Cloud > Fabric identifier > VSANs, where
identifier is a letter like A in the following figure. Click the General tab in
the main pane.
2. Right-click on VSANs in the navigation pane and select Create VSAN.
3. In the window that pops up, enter a VSAN name that complies with the
restrictions listed above, and specify the fabric identifier used in Step 1.
4. In the Properties box in the main pane, enter the same ID number in both
the ID and FCoE VLAN ID fields, as shown in the following figure.
5. In the FC Zoning Settings box in the main pane, click the Enabled radio
button if it is not already selected.
6. Click the Save Changes button.
7. Repeat Steps 1 through 6 for the other Fabric identifier nodes under
SAN Cloud.
8. To create a storage VSAN (in the Storage Cloud) for a Fabric Interconnect,
navigate to Storage Cloud > Fabric identifier > VSANs, and repeat
Steps 1 through 7.
The following sample figure shows the navigation pane after VSANs are created
for Fabric Interconnects Fabric A and Fabric B under SAN Cloud and Storage
Cloud. As indicated, you can use the same VSAN ID in both clouds. If you choose
to assign VSAN IDs 3010 and 3020 as in the figure, verify that they are not
already used elsewhere in your network.
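To check which VSAN IDs are already defined on a Fabric Interconnect, you can list them from the NX-OS CLI, entered with the connect nxos command as described in "Confirming the Storage Port WWPN is Logged Into the Fabric Interconnects" on page 26:

(nxos)# show vsan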
Designating Storage Ports and Assigning Storage Cloud VSANs
For each Fabric Interconnect, configure the ports connecting to the storage array
as type FC, then designate them as FC Storage Ports and assign a Storage Cloud
VSAN to them.
To designate storage ports, perform the following steps:
1. In the UCS Manager, click the Equipment tab at the top of the left-hand navigation bar, then navigate to Fabric Interconnects > Fabric Interconnect identifier, where identifier is a letter like A. Right-click Fabric Interconnect identifier and select Configure Unified Ports to open the pop-up window shown in the following figure.
2. Use the slider to configure the ports connecting to the storage array as type
FC.
3. Repeat steps 1 and 2 for the other Fabric Interconnects.
4. Wait until all Fabric Interconnects have rebooted.
5. Navigate back to the first Fabric Interconnect identifier, then to Fixed Module > FC Ports > FC Port 31.
6. In the main pane, make the following settings:
a. In the Actions box, select Configure as FC Storage Port.
b. In the Properties box, select the appropriate VSAN from the VSAN
drop-down menu. In the following figure, vsan3010 is selected for
Fabric Interconnect A.
7. Repeat Step 6 for FC Port 32.
8. Click the Save Changes button.
9. Repeat Steps 5 through 8 for the other Fabric Interconnects.
Confirming the Storage Port WWPN is Logged Into the Fabric Interconnects
Zoning enables access control between storage devices and user groups. Creating
zones increases network security and prevents data loss or corruption. A zone set
consists of one or more zones in a VSAN.
To confirm the storage port is logged into the Fabric Interconnects, perform the
following steps:
1. Use SSH to log in as admin to the virtual IP address of the Fabric
Interconnect.
2. Run the connect nxos command to enter the NX-OS CLI.
# connect nxos
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2013, Cisco Systems, Inc. All rights reserved.
...
(nxos)#
3. Run the show zoneset active command to display the active zonesets.
(nxos)# show zoneset active
zoneset name hostname-vsan-3010-zoneset vsan 3010
zone name hostname_A_8_UI1_vHBA-A vsan 3010
* fcid 0x6c0000 [pwwn 20:00:00:05:ad:1e:11:2f]
* fcid 0x6c00ef [pwwn 50:06:01:68:3e:e0:0a:6b]
* fcid 0x6c01ef [pwwn 50:06:01:60:3e:e0:0a:6b]
zone name hostname_A_7_GMS1_vHBA-A vsan 3010
* fcid 0x6c0003 pwwn 20:00:00:05:ad:1e:11:4f
* fcid 0x6c00ef [pwwn 50:06:01:68:3e:e0:0a:6b]
* fcid 0x6c01ef [pwwn 50:06:01:60:3e:e0:0a:6b]
zone name hostname_A_6_INSTA1_vHBA-A vsan 3010
* fcid 0x6c0004 [pwwn 20:00:00:05:ad:1e:11:7f]
* fcid 0x6c00ef [pwwn 50:06:01:68:3e:e0:0a:6b]
* fcid 0x6c01ef [pwwn 50:06:01:60:3e:e0:0a:6b]
zone name hostname_A_5_INSTA2_vHBA-A vsan 3010
* fcid 0x6c0005 [pwwn 20:00:00:05:ad:1e:11:5f]
* fcid 0x6c00ef [pwwn 50:06:01:68:3e:e0:0a:6b]
* fcid 0x6c01ef [pwwn 50:06:01:60:3e:e0:0a:6b]
4. Run the show flogi database vsan vsan-ID command, where vsan-ID is the identifier for the VSAN. In the following example, the VSAN ID for Fabric Interconnect A is 3010. Make a note of the world wide port names (WWPNs) in the PORT NAME column, which are used in "Creating Storage Connection Policies" below.
(nxos)# show flogi database vsan 3010
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------
fc1/31     3010  0x1000ef  50:06:01:60:3e:a0:28:d2  50:06:01:60:be:a0:28:d2
fc1/32     3010  0x1001ef  50:06:01:69:3e:a0:28:d2  50:06:01:60:be:a0:28:d2
5. Run the exit command.
(nxos)# exit
6. Repeat Steps 1 through 5 on the other Fabric Interconnects.
The following sample show flogi database vsan vsan-ID command uses
the VSAN ID for Fabric Interconnect B, 3020.
(nxos)# show flogi database vsan 3020
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------
fc1/31     3020  0x4200ef  50:06:01:61:3e:a0:28:d2  50:06:01:60:be:a0:28:d2
fc1/32     3020  0x4201ef  50:06:01:68:3e:a0:28:d2  50:06:01:60:be:a0:28:d2
Creating Storage Connection Policies
To create a storage connection policy for each Fabric Interconnect, perform the
following steps:
1. In the UCS Manager, click the SAN tab at the top of the left-hand navigation
bar, then navigate to Policies > root. Right-click
Storage Connection Policies and select
Create Storage Connection Policies.
2. In the window that pops up, make the following settings. The FI-ID variable
is the Fabric Interconnect identifier, such as A.
- For Name—storage-conn-polFI-ID
- For Add FC Target Endpoints—The WWPN displayed for port 31 in the output from the show flogi database vsan vsan-ID command in "Confirming the Storage Port WWPN is Logged Into the Fabric Interconnects" on page 26
- For Path—FI-ID
- For VSAN—The VSAN created for the Fabric Interconnect in "Creating VSANs for Zoning" on page 21, such as vsan3010 for Fabric Interconnect A
3. Repeat Step 2 for port 32.
4. In the Zoning Type field in the Properties box in the main pane, click the
Single Initiator Multiple Targets radio button.
5. Click the Save Changes button.
6. Repeat Steps 1 through 5 to create storage connection policies for the other
Fabric Interconnects.
The following example figure shows the result for Fabric Interconnect A. The
settings from Step 2 are recorded in the Fc Target Endpoints box.
Creating SAN Connectivity Policy
A virtual host bus adapter (vHBA) logically connects a virtual machine to a virtual
interface on the Fabric Interconnect and allows the virtual machine to send and
receive traffic through that interface. You must create a vHBA initiator group for
each vHBA.
Connectivity policies determine the connections and the network communication resources between the server and the LAN or SAN on the network. These policies use pools to assign MAC addresses, world wide names (WWNs), and WWPNs to servers and to identify the vNICs and vHBAs that the servers use to communicate with the network.

If you want to support a VSAN, it needs to be configured globally in the UCS Manager, and then it can be associated with a particular vHBA.
To create a SAN connectivity policy, perform the following steps:
1. In the UCS Manager, click the SAN tab at the top of the left-hand navigation bar, then navigate to Policies > root. Right-click SAN Connectivity Policies and select Create SAN Connectivity Policy.
2. Click Add. Set the following values:

- Name—vHBA-A or a name that complies with the naming conventions in your deployment
- WWNN Assignment—Select the appropriate value from the drop-down menu (for example, wwnn-pool1(768/784))
- Fabric ID—A
- Select VSAN—The VSAN created for the Fabric Interconnect in "Creating VSANs for Zoning" on page 21, such as vsan3010
- Adapter Policy—VMWare
3. Click the Save Changes button.
4. Repeat Steps 1 through 3 to create a SAN connectivity policy for the other
vHBA.
Creating vHBA Initiator Groups
To create a vHBA initiator group for the storage connectivity policy, perform the
following steps:
1. In the UCS Manager, click the SAN tab at the top of the left-hand navigation
bar, then navigate to Policies > root > SAN Connectivity Policies.
2. Add SAN Connectivity Policies for Fabric Interconnects A and B (the
preceding example shows SAN-con-pol-A).
3. Select SAN Connectivity Policies, for example: SAN-con-pol-A.
4. Add values like the following:

- Name—vHBA-init-grp-A
- Select vHBA Initiators—vHBA-B
- Storage Connection Policy—Storage-con-polB
5. Click OK to save changes.
6. Repeat the preceding steps for the other Fabric Interconnects.
Configuring SAN Cloud Policy
The following sample SAN cloud policy figures show how a single SAN connectivity policy can have two vHBA initiator groups (SAN-con-pol-A has vHBA-A and vHBA-B), or one vHBA initiator group (SAN-con-pol-B has one).
Verifying Service Profile Templates
When vHBA initiator groups are created, information about them is added to
service profile templates.
To verify service profile templates, perform the following steps:
1. In the UCS Manager, click the Servers tab at the top of the left-hand
navigation bar, then navigate to Service Profile Templates > root >
Service Template template-name > vHBAs.
2. Select SAN Connectivity Policy. Verify that vHBAs have been applied to
the service profile template, and that all details are correct.
3. Click the Save Changes button.
4. Repeat steps 1 through 3 for the other vHBAs.
The following figure shows a vHBA configuration within a service template.
Configuration of DAS is complete.
Configuring System Profiles for the Cisco UCS Manager
Configure profiles in the Cisco UCS Manager that modify the default settings of
hardware systems in accordance with the following recommended settings.
Configuring Ethernet Adapter Policy
To configure Ethernet adapter policy for all Ethernet interfaces, perform the
following steps:
1. In the UCS Manager, click the Servers tab at the top of the left-hand navigation bar, then navigate to Policies > root > Eth Adapter Policy Default > General.
2. Enter the field values as shown in the following table. Fields with a value of
Default do not need to be changed.
Note: Ensure that the Resources and Options values are set correctly, as
recommended in the table.
Field Name                  Field Value
Transmit Queues             1
Ring Size                   Default
Receive Queues              8
Ring Size                   Default
Completion Queues           9
Interrupts                  16
Transmit Checksum Offload   Enabled
Receive Checksum Offload    Enabled
TCP Segmentation Offload    Enabled
TCP Large Receive Offload   Enabled
Receive Side Scaling        Enabled
Fallback Timeout            Default
Interrupt Mode              MSI X
Interrupt Coalescing Type   Min
Interrupt Timer             350
3. Click the Save Changes button.
Configuring BIOS Policy
To configure BIOS policy, perform the following steps:
1. In the UCS Manager, click the Servers tab at the top of the left-hand
navigation bar, then navigate to Policies > root > BIOS Policies >
mural-bios > Advanced > Processor.
2. Enter the field values as shown in the following table. Fields with a value of
Default do not need to be changed.
Field Name                  Field Value
Turbo Boost                 Disabled
Enhanced Intel SpeedStep    Enabled
Hyper Threading             Default
Core Multiprocessing        all
Execute Disable Bit         Default
Virtualization Technology   Disabled (Enabled if VMs are expected to be run on the systems)
Direct Cache Access         Enabled
Processor C State           Disabled
Processor C1E               Enabled
Processor C3 Report         Default
Processor C6 Report         Default
Processor C7 Report         Default
CPU Performance             hpc
Max Variable MTRR Setting   Default
3. Click the Save Changes button.
Setting the Boot Order of Devices
To set the boot order for devices, perform the following steps:
1. In the UCS Manager, click the Servers tab at the top of the left-hand
navigation bar, then navigate to Policies > root > Boot Policies >
Boot Policy policy-name, where policy-name is the policy configured
for your deployment. Open the General tab in the main pane.
2. In the Boot Order box in the main pane, put the following items in the
indicated order.
a. CD-ROM (if present)
b. Storage > Local Disk
c. LAN > LAN vnic0
d. LAN > LAN vnic1 (omitted in a single-chassis deployment)
e. Any other devices
3. Click the Save Changes button.
Setting the RAID Policy
Configure local disk policy for blades to use RAID 1 mirrored local disks. This is a
requirement in a production deployment; in a test or lab deployment it might not
be necessary.
MURAL has been tested and validated with MegaRAID storage controllers for the
blades. If using those controllers, do not set the configuration mode to Any.
To set the RAID policy, perform the following steps:
1. In the UCS Manager, click the Servers tab at the top of the left-hand
navigation bar, then navigate to Policies > root >
Local Disk Config Policies > Local Disk.
2. Select Configuration Policy Default > General.
3. In the Properties box of the General tab in the main pane, select
RAID 1 Mirrored from the Mode drop-down menu, as shown in the
figure.
4. Click the Save Changes button.
Proceed to "Installing and Configuring a VM for GMS Lite" on page 41.
Installing and Configuring a VM for GMS Lite
This topic describes how to install and configure a virtual machine (VM)
environment on the laptop, in which you run the General Management System
(GMS) Lite application. You install the Oracle VM VirtualBox software, download
the Vdisk image provided in the MURAL software distribution, set VM parameters,
and initialize a VM.
If a VM is already available on the laptop, skip this topic and proceed to "Defining the Deployment Configuration File Using GMS Lite" on page 53.
Before You Begin
1. Ensure that the following functionality is available on the laptop:

- File decompression utility such as rar or unzip
- Bridge Connectivity Support from LAN over Ethernet (en0/LAN) port

2. Ensure that the laptop meets the following minimum requirements:

- Operating system—64-bit (MacOS or Windows)
- Hard disk—50 gigabytes (GB) of space available
- CPUs—2
- RAM—2 GB
- Network port (eth0)—Supports 100 megabits per second (Mbps) full duplex
Installing the Vdisk Image and VM Software
Working on the laptop on which GMS Lite is to run, perform the following steps:
1. Obtain the Vdisk image, which has a name similar to
vm35rc1Copy.qcow.gz, from the MURAL software distribution. Download
it to a local disk directory.
2. Uncompress the Vdisk image in a local disk directory (see the example after these steps).
3. Download version 4.3.10 or later of the Oracle VM VirtualBox software from
the oracle.com website, and install it.
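For Step 2, a minimal sketch assuming a Linux or Mac terminal and the example image name above (substitute the actual filename from your software distribution):

# gunzip vm35rc1Copy.qcow.gz

This leaves the uncompressed vm35rc1Copy.qcow file in the same directory, ready to be attached to the VM.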
Creating and Configuring the VM for GMS Lite
To create and configure the VM for GMS Lite, perform the following steps:
1. Launch the Oracle VM VirtualBox Manager and click the New icon. (In the
figure, there is an existing VM called VM1.)
2. In the Create Virtual Machine window that pops up, make the following
settings on the Name and operating system screen, then click the Next
button:
- Name—The VM name (GMS-Lite in the figure)
- Type—Linux
- Version—Other Linux (64-bit)
3. On the Memory size screen, use the slider to set a minimum value of
2048 MB. Click the Next button.
4. On the Hard drive screen, click the Use an existing virtual hard
drive file radio button if it is not already selected. Click the folder icon to
the right of the drop-down menu.
5. In the Please choose a virtual hard drive file window that pops up,
navigate to the directory that houses the Vdisk image file that you
downloaded and uncompressed in "Installing the Vdisk Image and VM
Software" on page 41. Click the filename to select it and click the Open
button.
6. On the Hard drive screen, click the Create button.
7. Click the newly created VM in the navigation pane, if it is not already
selected, then click the Settings icon.
8. On the Settings screen, click the Storage icon in the navigation pane.
9. Make the following changes in the Storage Tree box:
a. Remove the Vdisk from the Controller: IDE hierarchy.
b. Create a Controller: SCSI hierarchy level and place the Vdisk under
it.
c. For the SCSI controller, click the Use Host I/O Cache check box to
add the check (see the preceding figure).
The following figure depicts the result when the Vdisk is selected.
10. Click the Network icon in the navigation pane, and open the Adapter 1 tab
in the main pane. Make the following selections from the drop-down menus
in the indicated fields:
a. Attached to—Bridged Adapter
b. Name—An Ethernet LAN adapter
c. Promiscuous Mode—Allow VMs
11. Click the Serial Ports icon in the navigation pane, and open the Port 1
tab. Click the Enable Serial Port check box to add the check.
12. Click the OK button.
13. Click the VM in the navigation pane, if it is not already selected, then click
the Start icon.
14. In the terminal window that pops up after the VM initializes, log in as
admin and enter configure mode.
> en
# conf t
15. Set the admin password. We recommend the value admin@123, which is
the standard value that Technical Support expects to use when they log in
to a blade to help you with a problem.
(config)# username admin password admin@123
(config)# write memory
16. Assign IP addresses to the management interface (eth0) and default
gateway.
(config)# interface eth0 ip address gms-lite-ip-address /subnet-mask
(config)# ip default-gateway mgmt-network-default-gateway-ip-address
(config)# write memory
(config)# _shell
For example:
(config) # interface eth0 ip address 192.168.103.78 /24
(config) # ip default-gateway 192.168.103.1
(config) # write memory
(config) # _shell
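To confirm that the management interface and default gateway are reachable, you can send a few pings from the shell (the address is from the example above):

# ping -c 3 192.168.103.1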
17. Run the fdisk command to identify the disk volume for swap space, then
run the indicated commands to re-create and enable swap.
# fdisk -l | grep -i swap
/dev/sda7   1098   1620   4200997   82   Linux swap / Solaris
# free -m
# mkswap /dev/sda7
# swapon -a
# free -m
18. Verify that the following versions of software are installed on the laptop:
- Java Plug-in 10.40.2.43 or above
- JRE version 1.7.0_40-b43 or above
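A quick way to check the installed Java version from a terminal (the banner format varies by platform):

# java -version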
19. Start the GMS Lite server process.
(config)# pm process gms_server restart
Proceed to "Defining the Deployment Configuration File Using GMS Lite" on
page 53.
Defining the Deployment Configuration File Using GMS Lite
This topic explains how to use the General Management System (GMS) Lite
application to define the configuration of the deployment from a single node
instead of having to install and configure software manually on each blade.
The GMS Lite application runs on a laptop and serves two purposes during the
installation process:
- Provides the interface for preparing the Mobility Unified Reporting and Analytics (MURAL) configuration file, mural.xml, in advance of on-site installation, so that you do not have to interrupt the installation process to generate it. During the on-site installation, you transfer the file to the Master GCU node for use during configuration of other nodes.
- Acts as a Pre-eXecution Environment (PXE) boot server from which nodes fetch the MURAL ISO image file during the blade manufacturing process, as described in "Manufacturing the Master GCU Blade Using GMS Lite" on page 81.
Notes:
- The mural.xml configuration file in the MURAL installation package is specific to the software version for your deployment and is customized based on information in your CIQ. The file defines configuration settings for all nodes and for node clusters where applicable. In most cases you only need to verify that the settings are correct. You might need to modify settings if, for example, the slot numbers for blades have changed or some information was not yet known when the CIQ was completed.

  Note: If you must modify the file, we strongly recommend that you use the GMS interface; do not modify the file directly in a text editor.

  The MURAL installation package also includes node profiles, and in some cases you select from among the different profiles the one that provides optimal settings for your deployment.
- The GMS interface requires Java Plug-in 10.40.2.43 or above, and JRE version 1.7.0_40-b43 or above.
- The VLANs and Bonds tabs in the GMS interface are not used in this version of MURAL.
- In a Starter Pack deployment, you configure only one Ethernet port (by convention eth0 or eth1) on each blade for connection to the Management Network.
Before you begin, locate your Customer Information Questionnaire (CIQ)—for
some of the steps in this procedure, you will need to refer to the information on
the IP Survey tab.
Initializing the GMS Lite User Interface
To initialize the GMS Lite user interface, perform the following steps:
1. In a browser, access the following URL:
http://GMS-Lite-Server-IP-Address/configure
2. Log in as admin (the recommended password is admin@123).
3. In the Java applet window that pops up, log in as admin with the same
password.
4. On the screen depicted in the following figure, click the file navigation
button (...) to the right of the top field, navigate to the mural.xml
configuration file, and click the Load Config File button.
When the configuration file is loaded, the GMS Lite UI appears. Fields on each tab
are populated with values from the mural.xml configuration file that are specific
to your deployment, so as you proceed through the following sections in many
cases all you need to do is verify that settings are correct:
- "Verifying System Settings" below
- "Verifying MAC Addresses of Blades" on the facing page
- "Verifying World Wide Identifiers for LUNs" on page 58
- "Verifying Node and Node Cluster Configuration" on page 60
- "Assigning Application Profiles to Nodes" on page 63
- "Validating the Configuration" on page 79
Note: The figures in this topic show sample values only. Values in your
deployment are different for node names, blade names, MAC addresses, and so
on.
Verifying System Settings
To verify system settings that apply to the entire MURAL system (DNS, default gateway, and NTP), open the Global Settings tab and verify the values in the following fields:

- DNS Server
- Default Gateway
- NTP Server IP
- NTP Version
Verifying MAC Addresses of Blades
Verify that the MAC addresses recorded in the configuration file for all blades in
the chassis match the values in the Cisco Unified Computing System (UCS).
To verify the MAC address for each blade, perform the following steps:
1. Log in to the Cisco Unified Computing System (UCS) Manager. Click the Servers tab at the top of the left-hand navigation bar and navigate to Service Profiles > root > blade-name > vNICs > vNIC index, where index is 0 for the eth0 interface and 1 for the eth1 interface. Repeat for each blade-name, noting its MAC address for use in Step 4.
2. In the GMS Lite interface, open the Server Details tab.
3. In the Chassis box, select the chassis.
4. In the Slot box, select each blade in turn. Verify that the MAC address in
the Interface Members box matches the value obtained in Step 1. If it
does not match, click the Add button to the right of the
Interface Members box and create a new entry with the correct MAC
address. Then select the incorrect entry and click the Delete button.
5. For a dual-chassis deployment, repeat Steps 3 and 4 for Chassis 2.
Verifying World Wide Identifiers for LUNs
A world wide identifier (WWID) uniquely identifies the storage allocated to each
MURAL node and is associated with the node's logical unit number (LUN). At this
point in the installation procedure, actual WWIDs are not yet configured for any
LUNs. You need only verify that the dummy placeholder WWIDs associated with
them are unique and have the proper format.
Perform the following steps:
1. Open the Server Details tab in the GMS interface, if it is not already.
2. In the Chassis box, select the chassis.
3. Select each blade in the Slot box in turn, and verify that the value in the Storage_WWID column on each line in the Storage WWID(s) box has a total of 33 numerals and lowercase letters (as in the example below), begins with 3, and is unique across all blades. If a WWID is not compliant, click the Add button to the right of the Storage WWID(s) box to create a new entry with a compliant WWID. Select the incorrect entry and click the Delete button.
If the current placeholder WWIDs are not compliant, you can derive new
ones based on the WWID for the master GMS node's LUN, by changing the
last three digits. For example, if the WWID for the master GMS node's LUN
is 360060160c7102f004887f4015d49e211, change 211 to 212 in the
first placeholder WWID (360...49e212), to 213 in the second placeholder
WWID (360...49e213), and so on.
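A minimal sketch of this derivation in a bash shell, assuming the example master WWID above and that its final three characters are decimal digits:

master=360060160c7102f004887f4015d49e211
for i in 2 3 4 5 6 7 8; do echo "${master%???}21${i}"; done

This prints placeholder WWIDs 360060160c7102f004887f4015d49e212 through 360060160c7102f004887f4015d49e218, one per remaining blade.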
4. For a dual-chassis deployment, repeat Steps 2 and 3 for Chassis 2.
Verifying Node and Node Cluster Configuration
Verify the configuration of nodes and node clusters.
Each node has one IP address for the Management Network. The nodes in a
cluster also share a virtual IP address on the (same) Management Network.
To verify node and cluster configuration, perform the following steps:
1. Open the Networks tab and verify the values in the following fields for the Management Network:

- Network Name
- Network (IP address prefix and subnet)
- Interface Type
- Network Type
- Network Interface
2. Open the Nodes tab. For each node in the upper Node box, verify the
settings for the Management Network in the Network IP and
Network Subnet columns of the Node IP Details box.
3. Open the Clusters tab. Select each cluster in the upper Cluster box in
turn, and verify that correct values appear for it in the following fields in
the lower Cluster area:
- Cluster Type field—1+1 HA for the GCU and Insta nodes; N Node Cluster for the Compute node
- Cluster Name field
- Cluster Interface field
- Cluster VIP column in Cluster VIPs box—For GCU and Insta clusters, an IP address on the same Management Network as for the member nodes you verified in Step 2 (172.30.4 in the example).
In the following figure, the GCU node cluster called GCU-CLUS is selected.
Assigning Application Profiles to Nodes
Verify that application profiles are assigned to each node—Insta, Compute, and
GCU. You can then change individual settings as appropriate for your deployment.
Perform the following steps:
1. Open the Applications tab. The Application Name column in the
Application box indicates the application profile that is applied to the node
cluster in the Clusters column. The applications might be in a different
order in your deployment.
2. In the Application box, select the Compute application profile. Verify that
the Compute cluster is listed in the Application > Clusters > Cluster
box (in the following figure, COMP-CLUS in the middle box).
3. In the Application box, select the iNSTA application profile. Verify that the
Insta cluster is listed in the Application > Clusters > Cluster box (in the
following figure, INSTA-CLUS in the middle box).
4. In the Application box, select the PostgreSQL application profile for the Insta cluster. Verify that the Insta cluster is listed in the Application > Clusters > Cluster box (in the following figure, INSTA-CLUS in the second box). The following figure includes the Profiles > Properties > Property box at the bottom, which provides details about the postgresql_custom_mural application profile selected in the Profiles box.
5. In the Application box, select the Collector application profile. Verify that
the GCU cluster is listed in the Application > Clusters > Cluster box (in
the following figure, GCU-CLUS in the middle box).
6. In the Profiles box, select the collector_custom_adaptor_edrflow_edrhttp_template profile if it is not already. Verify that the values of the following parameters have the indicated format:

- adaptor.edrflow.output.directory (the output directory for EDR flow data; line 4 in the figure)—/data/collector/1/output/edrflow/%y/%m/%d/%h/%mi/gateway-name
- adaptor.edrhttp.output.directory (the output directory for EDR HTTP data; line 15 in the figure)—/data/collector/1/output/edrhttp/%y/%m/%d/%h/%mi/gateway-name
7. In the Profiles box, select the collector_custom_adaptor_bulkstats_template profile. Verify that the values of the following parameters have the correct format:

- adaptor.bulkStats.input.fileConfig.bulkStatsFile1.inputDirectory (the input directory for Bulkstats files; line 8 in the figure)—/data/collector/bulkstats_files/gateway-name
- adaptor.bulkStats.input.fileConfig.bulkStatsFile1.backupDirectory (the backup directory for Bulkstats files; line 9 in the figure)—/data/collector/1/output/edrhttp/%y/%m/%d/%h/%mi/gateway-name
- adaptor.bulkStats.input.fileConfig.bulkStatsFile1.fileNameFormat (the filename format for Bulkstats files; not shown in the figure)—*_%YYYY%MM%DD%hh%mm%ss
8. Returning to the Application box, select the DFS application profile. Verify
that the GCU cluster is listed in the Application > Clusters > Cluster
box (in the following figure, GCU-CLUS in the middle box).
9. In the Application box, select the GMS application profile. Verify that the
GCU cluster is listed in the middle Application > Clusters > Cluster box
(in the following figure, GCU-CLUS in the middle box).
10. In the Application box, select the PostgreSQL application profile for the
GCU cluster. Verify that the GCU cluster is listed in the Application >
Clusters > Cluster box (in the following figure, GCU-CLUS in the middle
box).
11. In the Application box, select the first Rubix application profile for the
GCU cluster (the value in the Application Instance column is 1). Verify
that the GCU cluster is listed in the Application > Clusters > Cluster
box (in the following figure, GCU-CLUS in the middle box).
12. In the Profiles box, select the rubix_custom_mural_dist profile if it is not already selected. Verify that the following parameters have the indicated values, as shown in the figure:
- application.atlas.sessionCookieDomain—The domain name to be used as the final portion of the URL. For example, if the URL is https://cisco-mural.cisco.com, then .cisco.com is the correct value.
- application.atlas.timeZone—The time zone code for the location of the ASR that is providing data, such as GMT for Greenwich Mean Time. The following table lists some common time zone codes, and a quick command-line check for a code appears after this procedure.
Time Zone Code                    Time Zone
CST6CDT                           United States central standard time
EST5EDT                           United States eastern standard time
Chile/Continental                 Chile time
America/Argentina/Buenos_Aires    Argentina time
CET                               Central European time
GMT                               Greenwich mean time
Note: The value must be the same for all Rubix application profiles.
- application.atlas.rubixInstance.1.redirectPort—6443
- application.atlas.rubixInstance.1.connectorPort—6080
- application.atlas.rubixInstance.1.connectorPortAJP—6009
- application.atlas.rubixInstance.1.serverPort—6005
- application.atlas.rubixInstance.1.channelReceiverPort—4000
13. Returning to the Application box, select the second Rubix application
profile for the GCU cluster (the value in the Application Instance column
is 2). Verify that the GCU cluster is listed in the Application > Clusters >
Cluster box (in the following figure, GCU-CLUS in the middle box).
14. In the Profiles box, select the rubix_custom_mural_cacheless profile if it is not already selected. Verify that the following parameters have the indicated values, as shown in the figure:
- application.reportAtlas.timeZone—The time zone code for the location of the ASR that is providing data, such as GMT for Greenwich Mean Time.
  Note: The value must be the same for all Rubix application profiles (the same as you set in Step 12).
- application.reportAtlas.rgeUrl—The URL to be used to access the MURAL UI, complete with the required port number, 30443; in the example https://cisco-mural.cisco.com:30443 is the correct value.
- application.reportAtlas.BulkStatsURL—The same value as for the application.reportAtlas.rgeUrl parameter, but with port 20443 instead of 30443; in the example https://cisco-mural.cisco.com:20443 is the correct value.
15. In the Profiles box, select the rubix_custom_rge_mural profile. Verify
that the following parameters have the indicated values:
- application.rge.mailNotificationHost—The customer mail hostname, such as mx1.cisco.com
- application.rge.mailNotificationSender—The email address from which notifications are sent, such as admin@cisco.com
- application.rge.rgekeyStoreAlias—app2.guavus.com
16. In the Profiles box, select the rubix_custom_bulkstats_mural profile.
Verify that the following parameters have the indicated values, as shown in
the figure:
- application.bulkstats.numOwners—2
- application.bulkstats.sessionCookieDomain—The domain name to be used as the final portion of the URL. For example, if the URL is https://cisco-mural.cisco.com, then .cisco.com is the correct value.
- application.bulkstats.timeZone—The time zone code for the location of the ASR that is providing data, such as GMT for Greenwich Mean Time.
  Note: The value must be the same for all Rubix application profiles (the same as you set in Steps 12 and 14).
17. Returning to the Application box, select the Solution application profile.
Verify that the GCU cluster is listed in the Application > Clusters >
Cluster box (in the following figure, GCU-CLUS in the middle box).
18. In the Profiles box, select the solution_custom_mural profile if it is not already selected. Verify that the following parameters have the indicated values, as shown in the figure:
- client—cisco
- dataStartTime—Expected start time for data, in the format YYYY-MM-DDThh:mmZ. It is fine to set the time at this point, because GMS does not run all jobs. The start time is set in "Setting the Data Start Time" on page 175 before all jobs are started.
- isStarterPack—true
19. Returning to the Application box, select the Workflow application profile.
Verify that the GCU cluster is listed in the Application > Clusters >
Cluster box (in the following figure, GCU-CLUS in the middle box).
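As a quick way to sanity-check a time zone code before entering it in the Rubix profiles (Steps 12, 14, and 16), you can render the current time in that zone from any Linux shell. This is a minimal sketch; an unrecognized code silently falls back to UTC, so check the zone abbreviation in the output.

# Render the current time in a candidate zone (codes from the table above).
TZ=America/Argentina/Buenos_Aires date
TZ=CST6CDT date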
Validating the Configuration
After verifying all configurations on the Server Details, Global Settings,
Networks, Nodes, Clusters, and Applications tabs as described in the
previous sections, you must validate the configuration settings.
To validate the GMS configuration, perform the following steps:
1. Click the Validate button on the bottom bar of the GMS interface.
If any field or value is incorrect, a list of errors is displayed.
2. For each error in turn, click the error and then the Go button to display the
entry in the GMS configuration that is causing the error.
3. Correct any errors.
4. Click the Validate button again. When all errors have been resolved, the
message Validation successful displays on the screen.
5. Click the Save Local File button to save the configuration to a local
directory on the laptop. Make note of the location; you refer to it when you
copy the file to the master GCU blade in "Configuring and Starting the
Master GCU Node" on page 106.
Note: To activate a configuration after you have made changes, you must run the
gms config config-file.xml activate command. Until you do so, the system
runs the previously activated configuration.
Proceed to "Manufacturing the Master GCU Blade Using GMS Lite" on the facing
page.
Manufacturing the Master GCU Blade Using GMS Lite
This topic describes the recommended method for manufacturing the master GCU
blade, namely by using GMS Lite. (Manufacturing refers to loading the
Mobility Unified Reporting and Analytics (MURAL) software onto the blade.) To
manufacture the master GCU blade manually instead, skip this topic and proceed
to "Manually Manufacturing the Master GCU Blade" on page 187.
Prerequisites
Before beginning, verify that the following requirements are met:
- VM software is installed on the laptop as described in "Installing and Configuring a VM for GMS Lite" on page 41.
- The laptop is on the same subnet as the Cisco UCS Blade Servers, Fabric Interconnects, and other hardware components. This enables the blades to fetch the required files from the laptop during Pre-eXecution Environment (PXE) booting.
- The mural.xml file specifies the correct settings for your deployment, as described in "Defining the Deployment Configuration File Using GMS Lite" on page 53.
- All blades were configured with Serial over LAN (SOL) during EMC setup.
Before you begin, locate your Customer Information Questionnaire (CIQ)—for
some of the steps in this procedure, you need to refer to it.
Manufacturing the Master GCU Blade
To use GMS Lite to manufacture the master GCU blade, perform the following
steps:
1. On the laptop where GMS Lite is to run, download the ISO image included with the MURAL software package to a local disk directory.
The ISO image filename is
/data/mfgcd-guavus-x86_64-20140315-133212.iso
To verify the MD5 checksum of the image, run the md5sum filename
command.
# md5sum /data/mfgcd-guavus-x86_64-20140315-133212.iso
a8eaa389e2027ece02ee467ff9af66ee /data/mfgcd-guavus-x86_64-20140315-133212.iso
2. Launch the VM. In the terminal window that pops up after the VM initializes,
log in as admin and enter configure mode.
> en
# conf t
3. Start the GMS Lite server.
(config)# pm process gms_server restart
4. Activate the mural.xml configuration file.
(config)# gms config mural.xml activate
5. Fetch and mount the MURAL ISO image file. For the local-directory-path
variable, substitute the directory to which you downloaded the image file in
Step 1. A trace similar to the one shown after the image mount command
indicates that the mount operation is succeeding.
(config) # image fetch scp://laptop-IP-Address/local-directory-path/mfgcd-guavus-x86_64-20140315-133212.iso
(config)# image mount mfgcd-guavus-x86_64-20140315-133212.iso
Copying linux...
Copying rootflop.img...
Copying image.img...
6. Verify that the image file has been copied to the standard directory.
(config) # ls /var/opt/tms/images
mfgcd-guavus-x86_64-20140315-133212.iso
7. Open the Cisco UCS - KVM Launch Manager in a browser and enter your
login credentials.
Note: For best performance, access the KVM Launch Manager in Firefox
with Java version 6 or greater.
The UCS - KVM Launch Manager application opens and displays all blades
available on the chassis.
8. Click the Launch button for the first blade (Server1 in the following
figure).
Click OK to download and open the kvm.jnlp file.
Click OK to clear the keyboard access warning message that appears.
The console for the port opens.
9. Navigate to KVM Console > Virtual Media, click Add Image, and
specify the path of the ISO image that you downloaded in Step 1.
10. Click the check box in the Mapped column for the added ISO image, which
is then mounted.
11. To reboot the blade to use the newly mounted image, access the KVM tab
and select Ctrl-Alt-Del from the Macros > Static Macros drop-down
menu.
12. When the boot screen appears, press F12 as quickly as possible to boot from the network. A trace similar to the following appears.
Once the blade starts booting from the network, GMS Lite uses PXE boot to
push the MURAL software image file to the master GCU blade. The
manufacturing process takes approximately 20 minutes. Wait to continue
until the manufacturing process completes, which is signaled by the
appearance of a log-in prompt in the KVM Launch Manager terminal
window.
13. Stop the GMS Lite server process.
(config) # pm process gms_server terminate
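Optionally, confirm that the process has stopped before closing the VM. This check uses the same show command that appears later in this guide for the master GCU node; the Current status field should no longer report running.

(config) # show pm process gms_server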
Verifying Zoning and FLOGI on the Fabric Interconnects
To verify zoning and the fabric login (FLOGI) on the Fabric Interconnects, perform
the following steps:
1. Use SSH to log in to the Fabric Interconnect.
2. Run the connect nxos command to connect to the Fabric Interconnect.
# connect nxos
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2013, Cisco Systems, Inc. All rights reserved.
...
(nxos)#
3. Run the show zoneset active command and verify that its output reports the fibre channel ID (FCID) for all world wide port names (WWPNs) and hosts, as shown for Fabric Interconnect A in the following example.
Note: In the following output, the FCIDs and WWPNs are examples only and
are different in each deployment. Also, the term pwwn in the output refers
to WWPNs.
hostname-A(nxos)# show zoneset active
zone name ucs_hostname_A_10_GMS1_vHBA-A vsan 3010
* fcid 0x100005 [pwwn 20:00:00:05:ad:1e:11:1f]
* fcid 0x1000ef [pwwn 50:06:01:60:3e:a0:28:d2]
* fcid 0x1001ef [pwwn 50:06:01:69:3e:a0:28:d2]
4. Repeat Steps 2 and 3 for each Fabric Interconnect. The following sample
show zoneset active command is for Fabric Interconnect B.
hostname-B(nxos)# show zoneset active
zone name ucs_hostname_B_22_GMS1_vHBA-B vsan 3020
* fcid 0x420001 [pwwn 20:00:00:05:ad:1e:11:4e]
* fcid 0x4200ef [pwwn 50:06:01:61:3e:a0:28:d2]
* fcid 0x4201ef [pwwn 50:06:01:68:3e:a0:28:d2]
5. Log in to the UCS Manager on the Fabric Interconnect and navigate to
Servers > Service Profiles > root > Service Profile profile-name.
Access the FC Zones tab and in each FC Target row, verify that the
WWPNs in the Name and Target WWPN fields are the same.
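As an optional cross-check, you can also list fabric logins directly on each Fabric Interconnect. This is a sketch assuming the standard NX-OS command set reached through connect nxos; every vHBA WWPN from the zoning output should appear with the same FCID.

hostname-A(nxos)# show flogi database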
Proceed to "Allocating Storage for the Master GCU Blade" on page 89.
Allocating Storage for the Master GCU Blade
This topic describes how to allocate storage on the EMC hardware for the master
GCU blade. The EMC storage arrays host the Hadoop distributed file system
(HDFS) connected to the Collector and Compute nodes and the columnar database
used by the Insta nodes.
There are three component procedures:
1. Registering the master GCU blade's world wide node name (WWNN) and
world wide port name (WWPN) with EMC. As the term suggests, a WWNN is
a unique identifier assigned to a server in the fibre channel fabric, in this
case a MURAL blade. The WWPN is similarly a unique identifier assigned to
the fibre channel port on the blade. See "Registering the Master GCU Blade
with EMC" on page 91.
2. Defining RAID groups and associating one with the logical unit number
(LUN) of the master GMS node. See "Creating RAID Groups and the LUN for
the Master GMS Node" on page 94.
3. Defining the storage group for the master GCU blade. See "Creating the
Storage Group for the Master GCU Blade" on page 100.
EMC Hardware Installation Prerequisites
Before beginning, verify that the following EMC hardware installation tasks are
completed:
- The EMC VNX chassis and standby power supply (SPS) chassis are installed in the rack according to the instructions in the EMC Unisphere installation guide (EMC P/N 300-012-924) included with the hardware.
- The SPS is connected to the storage processor (SP) management ports according to the instructions in the EMC Unisphere installation guide, using the cables provided with the product.
- Power cords are connected for the following components according to the instructions provided in the EMC Unisphere installation guide:
  - From SPS A and SPS B to SP A and SP B
  - From SPS A and SPS B to power distribution units (PDUs)
- The Fibre Channel SFP+ transceivers, included with the hardware, are installed in ports 4 and 5 of both SP A and SP B.
Note: Do not attach the cables between the storage system and the server array
until after initialization is complete.
In the following table, record the values provided in your Customer Information
Questionnaire (CIQ) for the indicated items.
Item                                 Value
SP A management port IP
SP B management port IP
Subnet mask and gateway for above
Admin name/password
Storage system serial number
Scope
DNS server address (optional)
Time server address
Inbound email address
Note: The following IP addresses cannot be used: 128.121.1.56 through
128.121.1.248, 192.168.1.1, and 192.168.1.2.
Configuring IP Addresses for the EMC System
The default IP addresses for the EMC system are 1.1.1.1 and 1.1.1.2. Perform
the following steps to configure the IP address of your laptop to a value on the
same subnet, connect to 1.1.1.1 using a web browser, and set IP addresses:
1. Configure your laptop’s IP address to 1.1.1.4/24 (a command-line sketch follows this procedure).
2. Connect a cable to Service Processor A.
3. Use a web browser to access http://1.1.1.1/setup.
4. Reconfigure the IP addresses for the EMC system to the range specified in
the CIQ.
Note: If you need to restart EMC manually during the set-up procedure, use a
web browser to access http://1.1.1.1/setup, log in as admin, and select the
restart option.
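How you set the laptop address in Step 1 depends on its operating system; on Windows, use the network control panel. A minimal sketch for a Linux laptop, assuming the wired interface is named eth0 (adjust the name for your system):

# Assign the temporary address used to reach the EMC setup page.
ip addr add 1.1.1.4/24 dev eth0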
Registering the Master GCU Blade with EMC
Register the master GCU blade's WWNN and WWPN—as configured in the Cisco
Unified Computing System (UCS)—in the EMC Unisphere interface. If EMC storage
has been previously configured with MURAL, you still need to verify that the blade
is registered correctly.
Note: Alerts might be generated during the process indicating that blades are not
registered (Alert 0x721c). You can ignore them until provisioning is complete,
after which point they need to be investigated.
To register the master GCU blade with EMC:
1. Log in to the Cisco Unified Computing System (UCS) Manager. Click the
SAN tab at the top of the UCS left-hand navigation bar, then navigate to
Pools > root > WWNN Pools > WWNN Pool pool-name, where pool-name is the pool name assigned by the UCS configuration script that ran in
"Configuring UCS for MURAL" on page 19 (it is wwnn-pool1 in the
following figure).
2. Click the Initiators tab in the main region of the window. In the Assigned
To column of the table, locate the rows for GCU-1. The WWNNs appear in
the Name column of those rows. The following figure depicts Chassis 1 in a
dual-chassis deployment.
In the following table, record the WWNNs for the master GCU blade (GCU-1
in the figure). In practice, the final two digits of a WWNN or WWPN are
usually sufficient to distinguish one blade from another as you verify their
registration in EMC, but it is best to record the entire WWNN.
Blade    WWNNs    WWPNs
GCU-1
3. Navigate to Pools > root > WWPN Pools > WWPN Pool pool-name
(just below the current location), and repeat Step 2, recording the master
GCU blade's WWPNs in the table.
4. Log in to the EMC Unisphere interface. Mouse over the Hosts icon in the
title bar and select Connectivity Status. A window pops up as shown in
the following figure.
The value in the Initiator Name column is the world wide name (WWN) recorded in EMC for a blade. It is hard-coded into every FC host bus adapter (HBA), and consists of 32 hexadecimal digits, displayed as colon-separated pairs. The first 16 digits match the WWNN you recorded in Step 2 and the last 16 digits match the WWPN you recorded in Step 3. In other words, the format of the WWN is WWNN:WWPN.
5. Select a row and click the Register button as shown in the preceding figure. The Register Initiator Record window pops up with the WWN in the WWN/IQN field. Perform the following steps:
a. Verify that the value in the Initiator Type field is SGI, selecting it on the drop-down menu if necessary.
b. Verify that the value in the Failover Mode field is ALUA–mode 4 (the default as shown in the figure), selecting it on the drop-down menu if necessary.
c. Click the New Host radio button if it is not already selected.
d. Enter the master GCU blade's hostname and IP address in the Host Name and IP Address fields.
e. Click the OK button.
6. Mouse over the Dashboard icon in the EMC Unisphere title bar and select
Hosts. Verify that the master GCU blade is listed with its IP address.
Creating RAID Groups and the LUN for the Master GMS Node
Next create RAID groups, define the LUN of the master GMS node, and associate
a RAID group with the LUN.
The following table specifies the parameters to use when creating RAID groups.
For RAID groups 5 and 10, the values in the Disks column are examples only;
consult with Technical Support about the RAID groups to assign to the disks in
your deployment. Disks 0, 1, 2, and 3 need to have an unbound configuration.
RAID Group        Storage Pool ID    RAID Configuration    Disks
5                 5                  5                     4, 5, 6, 7, 8, 9
10                10                 10                    10, 11, 12, 13
100 (not used)    100                Unbound               0, 1, 2, 3
The following table specifies the parameters to use when creating LUNs. Although
at this point you are creating only the LUN for the master GMS node, before doing
so it is important to review the table and verify that disks of the required sizes
have been allocated for all nodes. Keep in mind that the Collector, GMS, and
UI/Caching nodes each have their own LUNs but are hosted on the same blade:
COL-1, GMS-1, and UI-1 on the GCU-1 blade and COL-2, GMS-2, and UI-2 on the
GCU-2 blade. (The remaining LUNs are created in "Allocating Storage for the
Remaining Blades" on page 119.)
Note: Contact Technical Support now to consult about the following issues:
- The appropriate disk sizes depend on the throughput capacity required by your deployment. Do not simply use the sizes in the Disk Size (GB) column, which are examples only.
- The 50 gigabytes (GB) specified for the Insta-1-PGSQL and Insta-2-PGSQL disks is the minimum size in a production environment. The size for a lab environment might be different.
RAID  RAID Group Name  LUN Name       LUN ID  Disk Size (GB)  Controller  Storage Pool  Host - MAP
10    RAID Group 10    INSTA-1        0       1945            FAB-A       INSTA-STR-1   INSTA NODE-1
10    RAID Group 10    INSTA-1-PGSQL  1       50              FAB-A       INSTA-STR-2   INSTA NODE-1
10    RAID Group 10    INSTA-2        2       1945            FAB-B       INSTA-STR-1   INSTA NODE-2
10    RAID Group 10    INSTA-2-PGSQL  3       50              FAB-B       INSTA-STR-3   INSTA NODE-2
5     RAID Group 5     COL-1          4       1024            FAB-A       COL-STR-1     COL NODE-1
5     RAID Group 5     COL-2          5       1024            FAB-B       COL-STR-1     COL NODE-2
5     RAID Group 5     COMP-1         6       1024            FAB-A       COMP-STR-1    COMP NODE-1
5     RAID Group 5     COMP-2         7       1024            FAB-B       COMP-STR-2    COMP NODE-2
5     RAID Group 5     COMP-3         8       1024            FAB-A       COMP-STR-3    COMP NODE-3
5     RAID Group 5     COMP-4         9       1024            FAB-B       COMP-STR-4    COMP NODE-4
5     RAID Group 5     UI-1           10      1024            FAB-A       UI-STR-1      UI NODE-1
5     RAID Group 5     UI-2           11      1024            FAB-B       UI-STR-2      UI NODE-2
5     RAID Group 5     GMS-1          12      50              FAB-A       GMS-STR-1     GMS NODE-1
5     RAID Group 5     GMS-2          13      50              FAB-B       GMS-STR-2     GMS NODE-2
To create RAID groups and create and assign the LUN for the master GMS node, perform the following steps:
1. In the EMC Unisphere interface, mouse over the Storage icon in the title
bar and select Storage Pools. Open the RAID Groups tab and click
Create as shown in the figure.
2. In the Create Storage Pool window that pops up, create RAID group 5
with the parameters specified in the Storage Pool ID and
RAID Configuration columns of the following table (which is the same as
in the introduction, reproduced here for your convenience). As mentioned,
the values in the Disks column are examples only; consult with Technical
Support about the RAID groups to assign to the disks in your deployment.
RAID Group        Storage Pool ID    RAID Configuration    Disks
5                 5                  5                     4, 5, 6, 7, 8, 9
10                10                 10                    10, 11, 12, 13
100 (not used)    100                Unbound               0, 1, 2, 3
Repeat the following steps for each of the three RAID groups:
a. In the Storage Pool Type field, click the RAID Group radio button
if it is not already selected.
b. In the Storage Pool ID field, type the value in that column of the
preceding table.
c. In the RAID Configuration field, select from the drop-down menu
the value from that column of the preceding table.
d. Click the Manual radio button if it is not already selected, then click
the Select... button.
e. In the Disk Selection window that pops up, move the disks specified
in the Disks column of the preceding table from the
Available Disks box to the Selected Disks box.
f. Click the OK button in the Disk Selection window.
3. Repeat Step 2 for RAID group 10 and RAID group 100.
4. Click the OK button in the Create Storage Pool window.
5. Navigate to Storage > LUNs > LUNs and click Create. Perform the
following steps in the Create LUN window that pops up, referring to the
values for GMS-1 in the preceding table of LUN parameters:
a. In the Storage Pool Type field, click the RAID Group radio
button if it is not already selected.
b. In the RAID Type field, select RAID5: Distributed Parity (High
Throughput) from the drop-down menu.
c. In the Storage Pool for new LUN field, select 5 from the drop-down menu.
d. In the User Capacity field, select from the drop-down menu the
value (in GB) closest to that provided by Technical Support for the
Disk Size field.
e. In the LUN ID field, select from the drop-down menu the number in the LUN ID column of the table (in the sample table, 12).
f. In the LUN Name region, click the Name radio button if it is not
already selected, and type GMS-1 in the box.
6. Click Apply.
7. On the LUNs tab, verify that the parameters for GMS-1 match the values specified in the previous step, as shown in the following figure.
Creating the Storage Group for the Master GCU Blade
To create the storage group for the master GCU blade, perform the following
steps:
1. In the EMC Unisphere interface, mouse over the Hosts icon in the title bar,
and select Storage Groups. Click Create. A Create Storage
Group window similar to the following pops up.
The value in the Storage Group Name field is generated automatically,
the final number being the next one available in the sequence of assigned
numbers. We recommend inserting the node name in front of the
autogenerated value to make the storage group easier to identify in future.
In the example, the recommended value is GCU-1 Storage Group 8.
2. Click the OK button.
3. In the Storage Group Properties window that pops up, open the Hosts
tab and move GCU-1 from the Available Hosts column to the
Hosts to be Connected column.
4. Click the OK button, then the Apply button.
5. Verify that the list of storage groups for the master GCU blade is similar to
the following example.
Adjusting Caching on the EMC
You must adjust the caching settings on the EMC for your MURAL setup to keep
data in memory for the correct amount of time.
To adjust caching, perform the following steps:
1. In the EMC Unisphere interface, navigate to System Management >
Manage Cache.
2. On the Storage System Properties window that pops up, open the SP
Cache tab. Click the SP A Read Cache, SP B Read Cache, and
SP Write Cache check boxes to remove the checks and disable caching.
Click the OK button.
3. Open the SP Memory tab, and use the sliders in the User Customizable
Partitions region to set all three values to 1152. Click the OK button.
4. Return to the SP Cache tab and re-enable caching by clicking the
SP A Read Cache, SP B Read Cache, and SP Write Cache check boxes
to replace the checks. Click the OK button.
EMC is now configured for the master GCU blade. Proceed to "Configuring and
Starting the Master GCU Node" on the next page.
Configuring and Starting the Master GCU Node
Working on the master GCU blade, format the logical unit number (LUN) for the
master General Management System (GMS) node, start the GMS server process,
and load the mural.xml configuration file.
1. Log in to the console of the master GCU node as admin.
2. Download MURAL software patches from the FTP server to the /data directory on the master GCU blade. Apply patches to the node as specified in the Release Notes for the current software version (3.3 or later).
3. Enter configure mode and set the admin password. We recommend the
value admin@123, which is the standard value that Technical Support
expects to use when they log in to a blade to help you with a problem.
> en
# conf t
(config)# username admin password admin@123
4. Install the license.
(config)# license install LK2-RESTRICTED_CMDS-88A4-FNLG-XCAU-U
(config)# write memory
5. Run the tps multipath show command to display the world wide
identifiers (WWIDs) of the logical unit numbers (LUNs) for the nodes that
run on the master GCU blade (Collector, GMS, and UI/Caching).
The entry for each node begins with the word mpathx, where x is a lowercase letter—in the example, mpathc, mpathb, and mpatha. On the next
line in each entry, the size parameter specifies the amount of storage
allocated to the node.
Note the WWID of the LUN for the GMS node, which always has significantly less allocated storage than the other two nodes. In the following example, mpathc and mpatha each have a terabyte (size=1.0T) of allocated storage, whereas mpathb has 100 gigabytes (size=100G). Thus mpathb is the entry for the GMS node, and the WWID of the associated LUN is 360060160850036009ce6ff385bb3e311.
> en
# conf t
(config) # tps multipath show
mpathc (36006016085003600c469fbc85bb3e311) dm-2 SGI,RAID 5
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:2 sdd 8:48  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:2 sdh 8:112 active ready running
mpathb (360060160850036009ce6ff385bb3e311) dm-1 SGI,RAID 5
size=100G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:1 sdc 8:32  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:1 sdg 8:96  active ready running
mpatha (36006016085003600fad5dbb1ce6be311) dm-0 SGI,RAID 5
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:0 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:0 sdf 8:80  active ready running
If output from the tps multipath show command does not appear, or does
not include WWIDs as shown in the example, reboot the master GCU blade,
log in again as admin, and repeat this step.
6. Format the LUN for the master GMS node. For the GMS-WWID variable, substitute the WWID you noted in Step 5 (360060160850036009ce6ff385bb3e311 in the example). The storage for the GMS node is mounted at /data/pgsql, which is why that path appears in two of the following commands.
(config) # tps fs format wwid GMS-WWID
(config) # pmx register gns_rsync
registered process (gns_rsync)
(config) # tps fs fs1 device-name false
(config) # tps fs fs1
(config) # tps fs fs1 enable
(config) # tps fs fs1 mode async
(config) # tps fs fs1 mount-option mount-always
(config) # tps fs fs1 mount-point /data/pgsql
(config) # tps fs fs1 wwid GMS-WWID
(config) # write memory
(config) # _shell
# df -h |grep -i "/data/pgsql"
/dev/mapper/mpathbp1   99G  654M   93G   1% /data/pgsql
7. Start the GMS server process.
# cli -m config
(config) # write memory
(config) # pmx register pgsql
registered process (pgsql)
(config) # write memory
(config) # pm process tps restart
(config) # pm process gms_server launch auto
(config) # pm process gms_server restart
(config) # write memory
8. Verify that the GMS server process is running.
(config) # _shell
# cli -t "en" "config t" "show pm process gms_server" | grep
"Current status"
Current status: running
9. Using a web browser, launch the GMS application at
http://Master-GCU-IP-Address/configure.
10. Log in as admin using the password you set in Step 3 (the recommended
password for admin is admin@123).
11. In the Java applet window that pops up, log in as admin with the same
password.
12. On the screen depicted in the following figure, click the file navigation
button (...) to the right of the top field, navigate to the mural.xml
configuration file stored on the laptop (you saved the validated file in
"Validating the Configuration" on page 79), and click the Load Config File
button.
13. In a separate terminal window, log in to the EMC Unisphere interface at
http://IP-Address-EMC.
14. Mouse over the Storage icon in the title bar and select LUNs.
15. Select the LUN for the master GMS node (GMS-1 or equivalent) and click Properties to display the unique ID. To derive the WWID, remove the separators (:) from the unique ID, prefix it with 3, and change all capital letters to lowercase letters. A sample WWID is 360060160c7102f004887f4015d49e211. (A command-line sketch of this derivation appears after this procedure.)
16. Repeat Step 15 for the Collector and UI/Caching nodes on the master GCU
node.
17. In the GMS interface, open the Server Details tab.
18. In the Chassis box, select the chassis.
19. Select the GCU-1 blade in the Slot box, and in the Storage WWID(s) box, verify that the values in the Storage_WWID column match the values you derived in Steps 15 and 16. If a WWID does not match, click the Add button to the right of the Storage WWID(s) box to create a new entry and set the correct WWID. Select the incorrect entry and click the Delete button.
20. For the other slots (and chassis, for a dual-chassis deployment), verify that the value in the Storage_WWID column on each line in the Storage WWID(s) box has a total of 33 numerals and lowercase letters, begins with 3, and is unique across all blades. If a WWID is not compliant, click the Add button to the right of the Storage WWID(s) box to create a new entry with a compliant WWID. Select the incorrect entry and click the Delete button.
If the current placeholder WWIDs are not compliant, you can derive new ones based on the WWID for the master GMS node's LUN, by changing the last three digits. For example, if the WWID for the master GMS node's LUN is 360060160c7102f004887f4015d49e211, change 211 to 212 in the first placeholder WWID (360...49e212), to 213 in the second placeholder WWID (360...49e213), and so on. (The sketch after this procedure automates this step as well.)
21. At the bottom of the GMS window, type mural.xml in the box as shown in
the figure, and click the Save Server File button to validate and save the
configuration file on the master GCU node.
Note: The Save Server File button is not operational until all validation
errors are corrected. To save the file without correcting all errors and
completing the validation, click the Save Local File button instead.
22. Use SSH to log in to the management IP address on the master GCU node.
23. Activate the mural.xml configuration file that you saved on the master GCU node in Step 21.
(config) # gms config mural.xml activate
File successfully activated
Runtime configuration process started for all nodes. Please check its status through command gms config-runtime show status all
(config) # gms config-runtime show status all
Node          Status
collector-1:  UNINIT
insta-1:      UNINIT
compute-1:    UNINIT
collector-2:  UNINIT
insta-2:      UNINIT
compute-2:    UNINIT
24. Install the appliance for the master GCU node.
(config) # install appliance cluster cluster-name GCU-CLUS
node master-gcu-node-name
Installation in progress, check /data/gms/logs/node-name_cmc.log file for more details
(config) # install appliance show installation-progress
node master-gcu-node-name
(config) # install appliance show installation-status cluster
GCU-CLUS node master-gcu-node-name
master-gcu-node-name : Node succesfully installed
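The WWID derivation from Step 15, and the placeholder generation from Step 20, are mechanical enough to script. A minimal bash sketch, using the sample unique ID and WWID from those steps (both values are illustrative only):

# Derive a WWID from a Unisphere unique ID: strip the colons,
# lowercase the letters, and prefix the result with 3.
UNIQUE_ID="60:06:01:60:C7:10:2F:00:48:87:F4:01:5D:49:E2:11"
WWID="3$(echo "$UNIQUE_ID" | tr -d ':' | tr 'A-Z' 'a-z')"
echo "$WWID"    # prints 360060160c7102f004887f4015d49e211

# Generate compliant placeholder WWIDs by varying the last three digits.
BASE="${WWID%???}"
for tail in 212 213 214; do echo "${BASE}${tail}"; done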
Proceed to "Manufacturing the Remaining Blades" on page 115.
Manufacturing the Remaining Blades
To use GMS Lite and Pre-eXecution Environment (PXE) boot to manufacture all
nodes other than the master GCU node, perform the following steps:
1. Using SSH, log in to the master GCU node as admin.
2. Fetch and mount the MURAL ISO image file. In the image fetch command,
substitute the IP address of the laptop and the directory on the laptop to
which you downloaded the image in Step 1 of "Manufacturing the Master
GCU Blade" on page 81. A trace similar to the one shown after the
image mount command indicates that the mount operation is succeeding.
(config) # image fetch scp://GMS-Lite-server-IP-address/path-on-GMS-Lite-server/mfgcd-guavus-x86_64-20140315-133212.iso
(config)# image mount mfgcd-guavus-x86_64-20140315-133212.iso
Copying linux...
Copying rootflop.img...
Copying image.img...
3. Verify that the image file has been copied to the standard directory on the
master GCU blade:
# ls /var/opt/tms/images
mfgcd-guavus-x86_64-20140315-133212.iso
4. Complete the following steps in Cisco UCS - KVM Launch Manager to copy
the image file to a blade and reboot it.
a. Open the KVM console of the blade.
b. Access the KVM tab and select Ctrl-Alt-Del from the
Macros > Static Macros drop-down menu to reboot the blade. You
can also click Reset from the top of the console window.
c. After the prompt, press the F12 key as soon as possible to boot from
the network.
d. Repeat steps a through c for all blades, except the master GCU blade.
The following figure shows the results of pressing the F12 key.
Once the blades start booting from the network, the master GCU node pushes the
image to all the blades using PXE boot, and the manufacture process starts on
each blade in parallel.
A blade takes approximately 20 minutes to manufacture with the new image.
Wait until the last blade for which PXE boot was issued has been manufactured. A
login prompt is displayed once the image has been manufactured on a blade.
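If you want to confirm that every blade came back up before moving on, a simple reachability loop from the GMS Lite VM or the laptop works. This is a hypothetical convenience sketch; substitute the management addresses from your CIQ for the example values.

# The addresses below are placeholders, not values from this guide.
for ip in 172.30.4.11 172.30.4.12 172.30.4.13; do
    ping -c 1 -W 2 "$ip" > /dev/null && echo "$ip up" || echo "$ip DOWN"
done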
Verifying Zoning and FLOGI on the Fabric Interconnects
To verify zoning and the fabric login (FLOGI) on the Fabric Interconnects, perform
the following steps:
1. Use SSH to log in to the Fabric Interconnect.
2. Run the connect nxos A command to connect to the Fabric Interconnect.
3. Run the show zoneset active command and verify that its output reports the fibre channel ID (FCID) for all world wide port names (WWPNs) and hosts, as shown for Fabric Interconnect A and Fabric Interconnect B in the following example.
Note: In the following output and figures, the identifiers are examples only
and are different in your deployment. Also, the term pwwn in the output
refers to WWPNs.
hostname-A(nxos)# show zoneset active
zoneset name hostname-vsan-3010-zoneset vsan 3010
zone name hostname_A_12_UI1_vHBA-A vsan 3010
* fcid 0x100003 [pwwn 20:00:00:05:ad:1e:11:df]
* fcid 0x1000ef [pwwn 50:06:01:60:3e:a0:28:d2]
* fcid 0x1001ef [pwwn 50:06:01:69:3e:a0:28:d2]
zone name hostname_A_11_UI2_vHBA-A vsan 3010
* fcid 0x100006 [pwwn 20:00:00:05:ad:1e:11:ff]
* fcid 0x1000ef [pwwn 50:06:01:60:3e:a0:28:d2]
* fcid 0x1001ef [pwwn 50:06:01:69:3e:a0:28:d2]
hostname-B(nxos)# show zoneset active
zoneset name hostname-vsan-3020-zoneset vsan 3020
zone name hostname_B_24_UI1_vHBA-B vsan 3020
* fcid 0x420007 [pwwn 20:00:00:05:ad:1e:11:2e]
* fcid 0x4200ef [pwwn 50:06:01:61:3e:a0:28:d2]
* fcid 0x4201ef [pwwn 50:06:01:68:3e:a0:28:d2]
zone name hostname_B_23_UI2_vHBA-B vsan 3020
* fcid 0x420009 [pwwn 20:00:00:05:ad:1e:11:5e]
* fcid 0x4200ef [pwwn 50:06:01:61:3e:a0:28:d2]
* fcid 0x4201ef [pwwn 50:06:01:68:3e:a0:28:d2]
4. Log in to the UCS Manager on the Fabric Interconnect. Click the Servers
tab at the top of the left-hand navigation pane and navigate to
Service Profiles > root > Service Profile profile-name (in the
following figure, the profile name is HS-ESX01). Open the FC Zones tab in the main pane. For each FC Target row, verify that the WWPNs in the Name and Target WWPN fields are the same.
Proceed to "Allocating Storage for the Remaining Blades" on page 119.
Allocating Storage for the Remaining Blades
This topic describes how to allocate storage on EMC hardware for all blades other than the master GCU blade (you allocated storage for it in "Allocating Storage for the Master GCU Blade" on page 89). The storage arrays host the Hadoop distributed
file system (HDFS) used by the Collector and Compute nodes and the columnar
database used by the Insta nodes.
Note: The instructions in this topic assume that you have already satisfied the
prerequisites listed in "EMC Hardware Installation Prerequisites" on page 89.
As for the process of allocating storage for the master GCU blade, there are three
component procedures:
1. Registering each blade's world wide node name (WWNN) and world wide
port name (WWPN) with EMC. As the term suggests, a WWNN is a unique
identifier assigned to a server in the fibre channel fabric, in this case a
MURAL blade. The WWPN is similarly a unique identifier assigned to the
fibre channel port on the blade. See "Registering the Remaining Blades with
EMC" below.
2. Defining RAID groups and associating them with the logical unit numbers
(LUNs) of all nodes other than the master GMS node. See "Creating LUNs
for the Remaining Nodes" on page 123.
3. Defining storage groups for all blades other than the master GCU blade.
See "Creating Storage Groups for the Remaining Blades" on page 127.
Registering the Remaining Blades with EMC
For all blades other than the master GCU blade, register the blade's WWNN and
WWPN—as configured in the Cisco Unified Computing System (UCS)—in the
EMC Unisphere interface.
Note: Alerts might be generated during the set-up process indicating that nodes
are not registered (Alert 0x721c). You can ignore them until provisioning is
complete, after which point they need to be investigated.
To register the remaining blades with EMC:
1. Log in to the Cisco Unified Computing System (UCS) Manager. Click the
SAN tab at the top of the UCS left-hand navigation bar, then navigate to
Pools > root > WWNN Pools > WWNN Pool pool-name, where pool-name is the pool name assigned by the UCS configuration script that ran in
"Configuring UCS for MURAL" on page 19 (it is wwnn-pool1 in the
following figure).
2. Click the Initiators tab in the main region of the window. In the Assigned
To column of the table, locate the rows for the blades other than the master
GCU blade. The WWNNs appear in the Name column of those rows.
In the following table, record the WWNNs for the indicated blades. In
practice, the final two digits of a WWNN or WWPN are usually sufficient to
distinguish one blade from another as you verify their registration in EMC,
but it is best to record the entire WWNN.
Blade        WWNN    WWPN
Compute 1
Compute 2
Compute 3
Compute 4
GCU 2
Insta 1
Insta 2
3. Navigate to Pools > root > WWPN Pools > WWPN Pool pool-name
(just below the current location), and repeat Step 2, recording the blades'
WWPNs in the table.
4. Log in to the EMC Unisphere interface. Mouse over the Hosts icon in the
title bar and select Connectivity Status. A window pops up as shown in
the following figure.
The value in the Initiator Name column is the world wide name (WWN) recorded in EMC for a blade. It is hard-coded into every FC host bus adapter (HBA), and consists of 32 hexadecimal digits, displayed as colon-separated pairs. The first 16 digits match the WWNN you recorded in Step 2 and the last 16 digits match the WWPN you recorded in Step 3. In other words, the format of the WWN is WWNN:WWPN.
5. Select a row and click the Register button as shown in the preceding figure.
The Register Initiator Record window pops up with the WWN in the
WWN/IQN field. Perform the following steps:
a. Verify that the value in the Initiator Type field is SGI, selecting it
on the drop-down menu if necessary.
b. Verify that the value in the Failover Mode field is ALUA–mode 4
(the default as shown in the figure), selecting it on the drop-down
menu if necessary.
c. Enter the blade's hostname and IP address in the Host Name and
IP Address fields.
d. Click the OK button.
6. Repeat Step 5 for each blade.
7. Mouse over the Dashboard icon in the EMC Unisphere title bar and select
Hosts. Verify that each blade is listed with its IP address.
Creating LUNs for the Remaining Nodes
Next you create logical unit numbers (LUNs) for all nodes (other than the master
GMS node) and assign them to the appropriate RAID group. In "Allocating Storage
for the Master GCU Blade" on page 89 you consulted with Technical Support to
determine the appropriate values for the following table of LUN parameters. The
sample table and associated notes are reproduced here for your convenience.
The following table specifies the parameters to use when creating the LUNs. Use
the values determined in your previous consultation with Technical Support.
RAID  RAID Group Name  LUN Name       LUN ID  Disk Size (GB)  Controller  Storage Pool  Host - MAP
10    RAID Group 10    INSTA-1        0       1945            FAB-A       INSTA-STR-1   INSTA NODE-1
10    RAID Group 10    INSTA-1-PGSQL  1       50              FAB-A       INSTA-STR-2   INSTA NODE-1
10    RAID Group 10    INSTA-2        2       1945            FAB-B       INSTA-STR-1   INSTA NODE-2
10    RAID Group 10    INSTA-2-PGSQL  3       50              FAB-B       INSTA-STR-3   INSTA NODE-2
5     RAID Group 5     COL-1          4       1024            FAB-A       COL-STR-1     COL NODE-1
5     RAID Group 5     COL-2          5       1024            FAB-B       COL-STR-1     COL NODE-2
5     RAID Group 5     COMP-1         6       1024            FAB-A       COMP-STR-1    COMP NODE-1
5     RAID Group 5     COMP-2         7       1024            FAB-B       COMP-STR-2    COMP NODE-2
5     RAID Group 5     COMP-3         8       1024            FAB-A       COMP-STR-3    COMP NODE-3
5     RAID Group 5     COMP-4         9       1024            FAB-B       COMP-STR-4    COMP NODE-4
5     RAID Group 5     UI-1           10      1024            FAB-A       UI-STR-1      UI NODE-1
5     RAID Group 5     UI-2           11      1024            FAB-B       UI-STR-2      UI NODE-2
5     RAID Group 5     GMS-1          12      50              FAB-A       GMS-STR-1     GMS NODE-1
5     RAID Group 5     GMS-2          13      50              FAB-B       GMS-STR-2     GMS NODE-2
Note: You created RAID groups 5, 10, and 100 in "Creating RAID Groups and the LUN for the Master GMS Node" on page 94. To verify that the settings for them match the following table, mouse over the Storage icon in the EMC Unisphere title bar and select Storage Pools. Open the RAID Groups tab. If the values do not match, repeat Steps 1 through 4 of "Creating RAID Groups and the LUN for the Master GMS Node" on page 94 to correct them.
RAID Group        Storage Pool ID    RAID Configuration    Disks
5                 5                  5                     4, 5, 6, 7, 8, 9
10                10                 10                    10, 11, 12, 13
100 (not used)    100                Unbound               0, 1, 2, 3
To create and assign the LUNs for all remaining nodes, perform the following
steps:
1. Navigate to Storage > LUNs > LUNs and click Create. The
Create LUN window pops up.
2. Referring to the values for the node in the preceding table of LUN
parameters, perform the following steps:
a. In the Storage Pool Type field, click the RAID Group radio
button if it is not already selected.
b. In the RAID Type field, select from the drop-down menu the value
that corresponds to the RAID column in the table, for example
RAID5: Distributed Parity (High Throughput) if the RAID
column is 5.
c. In the Storage Pool for new LUN field, select the appropriate
value from the drop-down menu.
d. In the User Capacity field, select from the drop-down menu the
value (in GB) closest to that provided by Technical Support for the
Disk Size field.
e. In the LUN ID field, select the value from the drop-down menu that is specified in the LUN ID column in the table. LUN IDs auto-increment, and IDs that are already assigned are not available.
f. In the LUN Name region, click the Name radio button and type the
value from the LUN Name column in the box.
3. Click Apply.
4. Repeat Steps 1 through 3 for each node.
For the Insta nodes, you must create a total of four LUNs, each in
RAID Group 10 (the other nodes are in RAID Group 5):
- INSTA-1, to be associated with dbroot1 on both Insta nodes
- INSTA-2, to be associated with dbroot2 on both Insta nodes
- INSTA-1-PGSQL, to be associated with INSTA-1 only
- INSTA-2-PGSQL, to be associated with INSTA-2 only
The INSTA-1-PGSQL and INSTA-2-PGSQL partitions are for the Postgres database and store GMS-related data.
5. Navigate to Storage > LUNs to display the list of LUNs like the one in
the following figure. Verify that settings for each node's LUN match the
LUN parameters table.
Creating Storage Groups for the Remaining Blades
For all blades except Insta blades, there is one storage group for each blade. The Insta blades require two storage groups (one per blade), each of which is associated with three LUNs:
- The first storage group is associated with the INSTA-1, INSTA-2, and INSTA-1-PGSQL LUNs.
- The second storage group is associated with the INSTA-1, INSTA-2, and INSTA-2-PGSQL LUNs.
To create the required storage groups for all blades other than the master GCU
blade, perform the following steps. (You performed these steps for the master
blade in "Creating the Storage Group for the Master GCU Blade" on page 100.)
1. In the EMC Unisphere interface, mouse over the Hosts icon in the title bar,
and select Storage Groups. Click Create. A Create Storage
Group window similar to the following pops up.
The value in the Storage Group Name field is generated automatically,
the final number being the next one available in the sequence of assigned
numbers. We recommend inserting the node name in front of the
autogenerated value to make the storage group easier to identify in future.
For example, the recommended value for the COMP-2 node in this case is
COMP-2 Storage Group 12.
2. Click the OK button.
3. In the Storage Group Properties window that pops up, open the Hosts
tab and move the node from the Available Hosts column to the
Hosts to be Connected column.
4. Click the OK button, then the Apply button.
5. Verify that the list of storage groups for the nodes is similar to the following
example.
Adjusting Caching on the EMC
You must adjust the caching settings on the EMC for your MURAL setup to keep
data in memory for the correct amount of time.
To adjust caching, perform the following steps:
1. In the EMC Unisphere interface, navigate to System Management >
Manage Cache.
2. On the Storage System Properties window that pops up, open the SP
Cache tab. Click the SP A Read Cache, SP B Read Cache, and
SP Write Cache check boxes to remove the checks and disable caching.
Click the OK button.
3. Open the SP Memory tab, and use the sliders in the User Customizable
Partitions region to set all three values to 1152. Click the OK button.
4. Return to the SP Cache tab and re-enable caching by clicking the
SP A Read Cache, SP B Read Cache, and SP Write Cache check boxes
to replace the checks. Click the OK button.
EMC storage is now configured for all nodes.
Verifying WWIDs for All LUNs
Now that LUNs are configured for all nodes, you can verify that the world wide identifier (WWID) for each LUN is correct in the MURAL configuration file, mural.xml.
The number of LUNs, and therefore the number of WWIDs, for each node depends
on the type of node:
- GCU nodes each have three LUNs, to enable the component nodes to access the indicated type of storage:
  - GMS node, to access the PGSQL database of GMS and Zabbix data
  - Collector node, to access EDR and bulkstats data generated by ASRs
  - UI/Caching node, to access UI-generated reports
  You already verified the WWIDs of the LUNs for these nodes in "Configuring and Starting the Master GCU Node" on page 106.
- Compute nodes each have one LUN.
- Each Insta node (being clustered) has three WWIDs—one assigned to dbroot1 of both Insta 1 and Insta 2, one assigned to dbroot2 of both Insta 1 and Insta 2, and one assigned to pgsql. The pgsql storage partitions are for the Postgres database and store Rubix-related data; there is one pgsql storage partition for each Insta node.
Note: Ensure that the same WWID is assigned to dbroot1 for both the Insta 1 and Insta 2 nodes. Likewise, ensure that the same WWID is assigned to dbroot2 for both Insta nodes.
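Before walking through the GMS interface, you can spot-check the file itself. This is a minimal sketch, assuming shell access to mural.xml; the pattern matches the WWID format derived earlier (the digit 3 followed by the 32 hexadecimal digits of the unique ID).

# Count occurrences of each WWID in mural.xml. The dbroot1 and dbroot2
# WWIDs should each appear twice (once per Insta node); all others once.
grep -oE '3[0-9a-f]{32}' mural.xml | sort | uniq -c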
Perform the following steps:
1. In the EMC Unisphere interface, mouse over the Storage icon in the title
bar and select LUNs.
2. Select a LUN and click Properties to display the unique ID. To derive the
WWID, remove the separators (:) from the unique ID, prefix it with 3, and
change all capital letters to lowercase letters. A sample WWID is
360060160c7102f004887f4015d49e211. Note the WWID for later
verification.
3. Repeat Step 2 for each node other than the component nodes of the master
GCU node (Collector 1, GMS 1, and UI/Caching 1), which you verified in
"Configuring and Starting the Master GCU Node" on page 106.
4. In a browser, access the following URL to open the GMS interface:
http://Master-GMS-Node-IP-Address/configure
5. Log in as admin (the recommended password is admin@123).
6. In the Java applet window that pops up, log in as admin with the same
password.
7. On the screen depicted in the following figure, click the file navigation
button (...) to the right of the top field, navigate to the mural.xml
configuration file, and click the Load Config File button.
8. Open the Server Details tab.
9. In the Chassis box, select the chassis.
10. Select the blade in the Slot box, and in the Storage WWID(s) box verify
that the value in the Storage_WWID column matches the value you
learned in Steps 2 and 3 for each node running on the blade. If it does not,
click the Add button to the right of the Storage WWID(s) box to create a
new entry and set the correct WWID. Select the incorrect entry and click
the Delete button.
11. Repeat Steps 9 and 10 to verify the WWIDs for all blades and chassis.
Proceed to "Configuring the Standby GCU Node" on page 139.
Configuring the Standby GCU Node
After storage is allocated for all nodes and you define the deployment topology
for the second time as described in "Defining the Deployment Configuration File
Using GMS Lite" on page 53, you complete the configuration of the standby GCU
node by performing the following procedures:
- "Configuring the Password and License, and Applying Patches" below
- "Completing the Installation on the Standby GCU Node" on page 142
Configuring the Password and License, and Applying Patches
To download software patches, set the administrator password, and install the
license, perform the following steps:
1. Log in as admin via the console on the standby GCU node.
2. Download MURAL software patches from the FTP server to the /data
directory on the GCU standby node. Apply all patches that apply to nodes
running the General Management System (GMS) software.
For a complete list of patches and instructions for applying them, see the
release notes for the current software version (3.3 or later).
3. Enter config mode and set the password for the admin user. We
recommend setting the password to admin@123, which is the standard
value that Technical Support expects to use when they log in to a blade to
help you with a problem.
> en
# conf t
(config)# username admin password admin@123
4. Install the license.
(config)# license install LK2-RESTRICTED_CMDS-88A4-FNLG-XCAU-U
(config)# write memory
5. Run the tps multipath show command to display the world wide
identifiers (WWIDs) of the logical unit numbers (LUNs) for the nodes that
run on the standby GCU blade (Collector, GMS, and UI/Caching). The
instructions are the same as the ones you used for the master GCU blade in
"Configuring and Starting the Master GCU Node" on page 106, but they are
repeated here for your convenience.
The entry for each node begins with the word mpathx, where x is a
lowercase letter—in the example, mpathc, mpathb, and mpatha. On the next
line in each entry, the size parameter specifies the amount of storage
allocated to the node.
Note the WWID of the LUN for the GMS node, which always has significantly
less allocated storage than the other two nodes. In the following example,
mpatha and mpathc each have a terabyte (size=1.0T) of allocated storage,
whereas mpathb has 100 gigabytes (size=100G). Thus mpathb is the entry
for the GMS node, and the WWID of the associated LUN is
360060160850036009ce6ff385bb3e311.
> en
# conf t
(config) # tps multipath show
mpathc (36006016085003600c469fbc85bb3e311) dm-2 SGI,RAID 5
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:2 sdd 8:48  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:2 sdh 8:112 active ready running
mpathb (360060160850036009ce6ff385bb3e311) dm-1 SGI,RAID 5
size=100G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:1 sdc 8:32  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:1 sdg 8:96  active ready running
mpatha (36006016085003600fad5dbb1ce6be311) dm-0 SGI,RAID 5
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:0 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 2:0:0:0 sdf 8:80  active ready running
If output from the tps multipath show command does not appear, or does
not include WWIDs as shown in the example, reboot the standby GCU blade,
log in again as admin, and repeat this step.
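To pick out just the device names, WWIDs, and sizes from this output, a quick filter can help (a sketch; it assumes the underlying Linux multipath utility is available from the _shell prompt):

# list each multipath device header line and its size line
multipath -ll | grep -E '^mpath|size='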
6. At the bottom of the GMS window, type mural.xml in the box as shown in
the figure, and click the Save Server File button to validate and save the
configuration file on the standby GCU node.
Note: The Save Server File button is not operational until all validation
errors are corrected. To save the file without correcting all errors and
completing the validation, click the Save Local File button instead.
7. Activate the mural.xml configuration file.
(config) # gms config mural.xml activate
File successfully activated
Runtime configuration process started for all nodes. Please check
its status through command gms config-runtime show status all
(config) # gms config-runtime show status all
Node          Status
collector-1:  UNINIT
insta-1:      UNINIT
compute-1:    UNINIT
collector-2:  UNINIT
insta-2:      UNINIT
compute-2:    UNINIT
Completing the Installation on the Standby GCU Node
To complete the installation and configuration of the standby GCU node, perform
the following steps on the master GCU node:
1. Use SSH to log in to the management IP address on the master GCU node.
2. Enter config mode and complete the installation on the standby GCU node.
> en
# conf t
(config)# install appliance cluster cluster-name GCU-CLUS node standby-gcu-node-name
3. Wait about 30 minutes, then run the following command to verify that
installation completed successfully.
(config)# install appliance show installation-status cluster GCU-CLUS
master-gcu-node-name : Node successfully installed
standby-gcu-node-name : Node successfully installed
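If you prefer not to poll by hand, a shell loop like the following can watch the status from the _shell prompt (a sketch; it assumes the cli wrapper shown later in this guide and a five-minute polling interval):

# wait until both nodes report "Node successfully installed"
until cli -t "en" "conf t" "install appliance show installation-status cluster GCU-CLUS" \
      | grep -c "successfully installed" | grep -q "^2$"; do
    sleep 300
done
echo "GCU-CLUS installation complete"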
4. Wait another five to ten minutes for the master and standby GCU nodes to
synchronize (for the rsync procedure to complete).
You can now access the GMS UI through the virtual IP address of the GCU cluster
(http://GCU-Cluster-VIP-Address/configure).
Proceed to "Installing MURAL on the Remaining Blades" on page 143.
Installing MURAL on the Remaining Blades
In "Defining the Deployment Configuration File Using GMS Lite" on page 53, you
verified the configuration settings for your deployment in the mural.xml file.
Now you load this file onto each blade.
Before You Begin
Before loading the configuration file onto the blades, apply MURAL software
patches on all blades as directed in the Release Notes for software version 3.3 or
later.
Installing MURAL Software on the Remaining Blades
To complete the installation of MURAL software on the remaining blades, perform
the following steps:
1. Use SSH to log in to the management IP address on the master GCU node.
2. Complete the installation on the Insta nodes.
(config) # install appliance cluster cluster-name INSTA-CLUS all force-format
Installation in progress, check insta-node-name-1_cmc.log file for more details
Installation in progress, check insta-node-name-2_cmc.log file for more details
3. Wait about an hour, then run the following command to verify that
installation completed successfully.
(config)# install appliance show installation-status cluster INSTA-CLUS
insta-node-name-1 : Node successfully installed
insta-node-name-2 : Node successfully installed
4. Complete the installation on the Compute nodes.
(config) # install appliance cluster cluster-name COMP-CLUS all force-format
Installation in progress, check compute-node-name-1_cmc.log file for more details
Installation in progress, check compute-node-name-2_cmc.log file for more details
5. Wait about 20 minutes, then run the following command to verify that
installation completed successfully.
(config)# install appliance show installation-status cluster COMP-CLUS
compute-node-name-1 : Node successfully installed
compute-node-name-2 : Node successfully installed
Verifying the Installation is Complete
The installation process usually takes two to three hours to complete, but the
time varies by deployment. The install appliance show installation-status
all command produces output like the following when the installation on all nodes
is complete:
(config) # install appliance show installation-status all
master-gcu-node-name : Node successfully installed
insta-node-name-1 : Node successfully installed
compute-node-name-1 : Node successfully installed
standby-gcu-node-name : Node successfully installed
insta-node-name-2 : Node successfully installed
compute-node-name-2 : Node successfully installed
If the installation fails at any point, please contact Technical Support. Log files are
available in the /data/gms/logs directory on the master GCU node.
To verify successful completion of the installation, perform the following steps:
1. On the master GCU blade, run the hadoop dfsadmin -report command.
Output like the following confirms that the process is running. If it does not
appear within five minutes, enter configure mode and run the pm process
tps restart command, then repeat the hadoop command.
# hadoop dfsadmin -report
Configured Capacity: 2164508082176 (1.97 TB)
Present Capacity: 2054139189528 (1.87 TB)
DFS Remaining: 2054137925632 (1.87 TB)
DFS Used: 1263896 (1.21 MB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: IP-Address-1:50010
Decommission Status : Normal
Configured Capacity: 1082254041088 (1007.93 GB)
DFS Used: 631948 (617.14 KB)
Non DFS Used: 55184448372 (51.39 GB)
DFS Remaining: 1027068960768 (956.53 GB)
DFS Used%: 0%
DFS Remaining%: 94.9%
Last contact: timestamp

Name: IP-Address-2:50010
Decommission Status : Normal
Configured Capacity: 1082254041088 (1007.93 GB)
DFS Used: 631948 (617.14 KB)
Non DFS Used: 55184444276 (51.39 GB)
DFS Remaining: 1027068964864 (956.53 GB)
DFS Used%: 0%
DFS Remaining%: 94.9%
Last contact: timestamp

To restart the process if necessary:

# cli -m config
(config) # pm process tps restart
(config) # _shell
2. Log in to the standby GCU blade and run the following commands to verify
that PGSQL is active.
> en
# _shell
# ls -l /data/pgsql; ps -eaf | grep pgsql
total 24
drwxr-xr-x 3 admin    root      4096 timestamp 9.2
drwxr-xr-x 2 postgres postgres  4096 timestamp archivedir
drwx------ 2 admin    root     16384 timestamp lost+found
admin  1405 12395  0 11:37 ?      00:00:00 /bin/sh -c pg_basebackup -U postgres -h 172.30.4.141 -D /data/pgsql/9.2/data_bb -U replicationuser -v -x &> /var/log/pgsql/9.2/pgbb.log
admin  1406  1405  0 11:37 ?      00:00:00 pg_basebackup -U postgres -h 172.30.4.141 -D /data/pgsql/9.2/data_bb -U replicationuser -v -x
admin  4270  7658  0 11:38 pts/0  00:00:00 grep pgsql
3. Run the following commands on both the master and standby GCU nodes to
copy a deployment-specific configuration file.
> en
# _shell
# mkdir -p /data/CoreJob/config
# cd /data/CoreJob/config
# scp /opt/etc/oozie/CoreBaseJob/solutionConfig.json.atlas solutionConfig.json
Tuning the System for Performance
To tune the performance of the system:
1. Log in to the master GCU node and run the following commands, which set the minimum map split size to 268435456 bytes (256 MB) and cap the number of simultaneous map tasks per TaskTracker at 10.
> en
# configure terminal
(config)# internal set modify /tps/process/hadoop/attribute/mapred.min.split.size/value value string 268435456
(config)# internal set create - /tps/process/hadoop/attribute/mapred.tasktracker.map.tasks.maximum/value value string 10
(config)# pm process tps restart
(config)# write memory
2. Repeat Step 1 on the standby GCU node.
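To confirm that the values took effect after the restart, one option is to inspect the Hadoop configuration that the platform regenerates (a sketch; the mapred-site.xml path is an assumption and may differ in your deployment):

# check the regenerated Hadoop job configuration for the tuned values
grep -A1 "mapred.min.split.size" /opt/hadoop/conf/mapred-site.xml
grep -A1 "mapred.tasktracker.map.tasks.maximum" /opt/hadoop/conf/mapred-site.xml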
Proceed to "Verifying that Processes are Running" on page 149.
Verifying that Processes are Running
After making any performance-related modifications, verify that the processes
are running.
Note: After you make any modifications, wait at least 15 minutes to ensure
processes have had a chance to restart before running the following commands.
1. Log in to the master GCU node and verify that all Compute nodes have
joined the HDFS cluster. The output shown in the following sample
command sequence indicates correct configuration.
> en
(config) # _shell
# ps -ef | grep hadoop | awk {'print $NF'}
org.apache.hadoop.hdfs.server.datanode.DataNode
org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop
org.apache.hadoop.hdfs.server.namenode.NameNode
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
org.apache.hadoop.mapred.JobTracker
# hadoop dfsadmin -report 2>/dev/null | egrep "available|Name|Status"
Datanodes available: 3 (3 total, 0 dead)
Name: 10.10.2.13:50010
Decommission Status : Normal
Name: 10.10.2.14:50010
Decommission Status : Normal
Name: 10.10.2.17:50010
Decommission Status : Normal
Note: These are internal IP addresses.
2. Run the following commands on the standby GCU node.
> en
# conf t
(config) # show pm process collector | grep status
Current status: running
(config) # _shell
# ps -ef | grep hadoop | awk {'print $NF'}
org.apache.hadoop.hdfs.server.datanode.DataNode
org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop
# cli -t "en" "conf t" "show pm process collector" | grep status
Current status: running
3. Log in to the master Insta node, and run the following commands.
> en
# conf t
(config) # show pm process insta
Current status: running
(config) # insta infinidb get-status-info
Processes are running correctly if the output reports ACTIVE state for all
modules, and RUNNING status for all instances and Adapter.
4. Log in to the standby Insta node and repeat Step 3.
5. Run the following command to verify that Postgres is running on the master
Insta node.
> en
# _shell
# ps -ef | grep pgsql
postgres  2990     1  0 timestamp  /usr/pgsql-9.2/bin/postmaster -p 5432 -D /data/pgsql/9.2/data
6. Repeat Step 5 on the standby Insta node and both the master and standby
GCU nodes.
Proceed to "Generating and Pushing the Information Bases" on the facing page.
Generating and Pushing the Information Bases
To configure your system for the data specific to your environment, you must
update the information bases (IBs) and fetch them. In some cases, you might
also need to modify the IBs manually to add or reclassify entries.
Modifying IBs Manually
In addition to the configuration performed by the scripts, you might need to
make some manual modifications to the IBs.
1. On the master GCU node, update the IBs with data from the software
image.
# conf t
(config)# pmx
pm extension> subshell bulkstats
pm extension(bulkstats)> update all ibs from image
pm extension(bulkstats)> quit
pm extension> subshell aggregation_center
pm extension(aggregation center)> update all ibs from image
2. Run the show ib deviceGroup.list command to confirm that it returns no
output (the command prompt returns immediately), which indicates that no
device groups are already defined.
pm extension(aggregation center)> show ib deviceGroup.list
pm extension (aggregation center)>
3. Run the indicated edit ib commands to add entries to the smartDG.list
file.
pm extension (aggregation center)> edit ib smartDG.list add Device Group: SmartPhone
pm extension (aggregation center)> edit ib smartDG.list add Device Group: Feature Phone
pm extension (aggregation center)> show ib smartDG.list
1 SmartPhone
2 Feature Phone
4. Run the following command to make changes for the core job.
# cd /opt/etc/scripts
# ./install_solution_config.sh
is tethering app enabled, enter (Y/N) : Y
is httperror app enabled, enter (Y/N) : Y
solutionConfig.json file is successfully copied
5. Using a text editor such as vi, create a file called
/data/work/serverFile_tethering to contain the details of the ASR
gateways.
Include an entry for each ASR gateway with the indicated parameters. The
delimiter between the items is ", " (comma followed by a space).
> en
# _shell
# cd /data
# mkdir work
# cd work
# vi /data/work/serverFile_tethering
ASR-IP-address, ASR-user-name, ASR-password, directory-on-ASR-for-database-copies
For example:
192.168.156.96, admin, admin@123, /data/TTYT
6. Repeat Steps 4 and 5 on the standby GCU node.
Configuring IBs for EDR
The following table shows a sample data set for setting up the IBs.

Note: The values in the sample table are examples only; substitute values
appropriate for your deployment. For example, instead of General Packet Radio
Service (GPRS) you might use System Architecture Evolution (SAE) or Code
Division Multiple Access (cdma2000), in which case Gateway GPRS Support Node
(GGSN) becomes Packet Data Network (PDN) gateway (P-GW) or Home Agent (HA),
respectively. Similarly, Serving GPRS Support Node (SGSN) becomes serving
gateway (S-GW) or High Rate Packet Data (HRPD) serving gateway (HS-GW) in SAE,
or Packet Data Serving Node (PDSN) in cdma2000.

In the sample table, GGSNIP is the management IP address of the GGSN and
SGSNIP is the management IP address of the SGSN.
To configure the IBs for EDR, perform the following steps:
1. Use SSH to log in to the master GCU node and enter the aggregation center
subshell.
> en
# conf t
(config) # pmx
pm extension> subshell aggregation_center
2. Run the add ib-destination command twice to define the IP address on
the Control (internal) Network for the master and standby GCU nodes.
pm extension (aggregation center)> add ib-destination IP-address
3. Add the GGSN IPs and GGSN names.
pm extension (aggregation center)> edit ib ipGgsn.map add
4. Add the SGSN IPs and SGSN names.
pm extension (aggregation center)> edit ib ipSgsn.map add
5. Add the Access Point Name (APN) and corresponding APN group.
pm extension (aggregation center)> edit ib apnGroup.map add
6. Add the Radio Access Type (RAT) ID and type.
pm extension (aggregation center)> edit ib ratidtype.map add
7. By default, the configuration has only Low, Active, and High segments,
as shown in the following example.

pm extension> subshell aggregation_center
pm extension (aggregation center)> show ib segment.map
1 [524288000][High]
2 [104857600][Active]
3 [0][Low]
Configuring Gateways For All IBs
Configure an entry for each gateway from which MURAL accepts transferred files.
The term gateway refers to an ASR, as does the term distribution center (DC)
used in some earlier implementations of Cisco software for the ASR. The gateway
or DC name serves as the unique identifier for a gateway configuration.
For the parameters to the add gateway command, supply values that meet the
following requirements:

- For the schema_version parameter (which applies to bulkstats data),
  specify an integer only (for example, 15, not 15.0).

- For the edr-filename-pattern parameter:

  o The specified value must correspond to the one used by the ASR for
    the files it generates. The value is specified on the ASR5K worksheet
    in the Customer Information Questionnaire (CIQ) for the deployment.

  o The standard value is as follows:

    *_MURAL-edr_*_%MM%DD%YYYY%hh%mm%ss.*

    Include the final period and asterisk (.*) if the value in the CIQ ends
    with an extension such as .gz or .txt.

    The two instances of paired asterisk and underscore (*_) are optional,
    but when they are included the ASR replaces the asterisk with the
    following elements in the filenames it generates:

    - For the first asterisk, the ASR name. (If your deployment of MURAL
      is based on a previous implementation, you might see the string %DC
      instead of the asterisk in this position; the result is the same.)

    - For the second asterisk, either flow for files of EDR flow data or
      http for files of EDR Hypertext Transfer Protocol (HTTP) data.

    For the alphabetical variables (%MM%DD%YYYY%hh%mm%ss), the ASR
    substitutes the corresponding timestamp values (month, day, year,
    hours, minutes, seconds); for example, 05012014060000 for 06:00 hours
    on May 1, 2014.

- For the bulkstat-filename-pattern parameter:

  o The specified value must correspond to the one used by the ASR for
    the files it generates. The value is specified on the ASR5K worksheet
    in the Customer Information Questionnaire (CIQ) for the deployment.

  o The standard value is as follows:

    *_%YYYY%MM%DD%hh%mm%ss.*

    Include the final period and asterisk (.*) if the value in the CIQ ends
    with an extension such as .gz or .txt.

    The first asterisk is optional, but when it is included the ASR
    replaces it with the ASR name.

    For the alphabetical variables (%YYYY%MM%DD%hh%mm%ss), the ASR
    substitutes the corresponding timestamp values (year, month, day,
    hours, minutes, seconds); for example, 20140501060000 for 06:00 hours
    on May 1, 2014.

  (Sample filenames matching both standard patterns appear after this list.)

- For the edr-file-path parameter:

  o Start the value with a forward slash (/).

  o The value must be unique for each ASR. Consult with Technical Support
    to learn the appropriate values for the ASRs in your deployment.

  o If the ASR is configured in charging mode, the final element must be
    /edr. If the ASR is configured in reporting mode, it must be /redr.

  o The specified value is the subdirectory under /data/collector to which
    the ASR sends files. For example, if the specified value is
    /NewYork/edr111/edr, the ASR writes files to
    /data/collector/NewYork/edr111/edr.

- For the bulkstat-file-path parameter:

  o Start the value with a forward slash (/).

  o The value must be unique for each ASR. Consult with Technical Support
    to learn the appropriate values for the ASRs in your deployment.

  o The specified value is the subdirectory under /data/collector to which
    the ASR sends files. For example, if the specified value is
    /NewYork/bs111, the ASR writes files to /data/collector/NewYork/bs111.
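For illustration, these are filenames an ASR named ASR-1 might generate under the two standard patterns (the names and the .gz extension are hypothetical):

ASR-1_MURAL-edr_flow_05012014060000.txt.gz    (EDR flow data; matches *_MURAL-edr_*_%MM%DD%YYYY%hh%mm%ss.*)
ASR-1_MURAL-edr_http_05012014060000.txt.gz    (EDR HTTP data; matches *_MURAL-edr_*_%MM%DD%YYYY%hh%mm%ss.*)
ASR-1_20140501060000.csv.gz                   (bulkstats data; matches *_%YYYY%MM%DD%hh%mm%ss.*)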
To configure gateways, perform the following steps:
1. Add an entry for each gateway (ASR). The command syntax statement and
example put each parameter on its own line for ease of reading.
Note: See the preceding list for parameter formatting requirements.
> en
# conf t
(config)# pmx subshell aggregation_center
pm extension (aggregation center)> add gateway name gateway-name
region gateway-region
location gateway-area
schema_version bulkstat-schema-version
ip gateway-IP
timezone gateway-timezone
edr-filename-pattern filename-pattern-edr
bulkstat-filename-pattern filename-pattern-bulkstats
type gateway-type
edr-file-path destination-dir-edr-files/edr_or_redr
bulkstat-file-path destination-dir-bulkstats-files
For example:
pm extension (aggregation center)> add gateway name ASR-1
region EAST
location USA
schema_version 15
ip 10.10.10.255
timezone America/New_York
edr-filename-pattern *_MURAL-edr_*_%MM%DD%YYYY%hh%mm%ss.*
bulkstat-filename-pattern *_%YYYY%MM%DD%hh%mm%ss.*
type HA
edr-file-path /NewYork/edr111/edr
bulkstat-file-path /NewYork/bs111
The add gateway command produces trace output similar to the
following:

Adding IBs....
***************Adding ib dcRegionArea***************
Adding in dcRegionArea.map
[dcRegionArea.map]:
generated dc.id
generated region.id
generated area.id
generated dcRegionArea.id.map
Summary:
========
Successful IBs : 1 out of 1
Failed IBs : No id generation failures.
pushing ib [dc.id] to all IB destinations
pushing ib to ip 192.168.151.173
This product may be covered by one or more U.S. or foreign pending or
issued patents listed at www.guavus.com/patents.
Guavus Network Systems NetReflex Platform
dc.id                       100%  113     0.1KB/s   00:00
This product may be covered by one or more U.S. or foreign pending or
issued patents listed at www.guavus.com/patents.
Summary:
========
Successful Transfers : 4 out of 4
Failed Transfers : No transfer failures.
pushing ib [region.id] to all IB destinations
pushing ib to ip 192.168.151.173
This product may be covered by one or more U.S. or foreign pending or
issued patents listed at www.guavus.com/patents.
Guavus Network Systems NetReflex Platform
region.id                   100%   86     0.1KB/s   00:00
***************Added ib dcRegionArea***************
***************Adding ib gateway***************
Adding in gateway.map
[key.map]:
generated key.id.map
[gateway.map]:
generated gateway_bs.id
generated version_bs.id
generated dc_bs.id
generated region_bs.id
generated area_bs.id
generated gatewayRegionArea.id.map
[schema.map]:
generated schema.id.map
[metric.map]:
generated metric.id.map
[gatewayIDVersion.map]:
Summary:
========
Successful IBs : 5 out of 5
Failed IBs : No id generation failures.
pushing all ibs to all IB destinations
pushing ib to ip 192.168.151.173
This product may be covered by one or more U.S. or foreign pending or
issued patents listed at www.guavus.com/patents.
Guavus Network Systems NetReflex Platform
gatewayIDVersion.map        100%  113     0.1KB/s   00:00
schemaIDToKeyID.map         100% 8018     7.8KB/s   00:00
Gateway_Schema_ASR.map      100% 1779KB   1.7MB/s   00:00
Gateway_Schema_GMPLAB1.map  100% 1779KB   1.7MB/s   00:00
Gateway_Schema_GMPLAB2.map  100% 1779KB   1.7MB/s   00:00
Gateway_Schema_NewDelhi.map 100% 1779KB   1.7MB/s   00:00
Gateway_Schema_gmplab1.map  100% 1779KB   1.7MB/s   00:00
Gateway_Schema_gmplab2.map  100% 1779KB   1.7MB/s   00:00
gatewaySchemaMetric.map     100%   10MB  10.4MB/s   00:00
key.id.map                  100% 1375     1.3KB/s   00:00
key.map                     100%  748     0.7KB/s   00:00
gateway_bs.id               100%  113     0.1KB/s   00:00
version_bs.id               100%   12     0.0KB/s   00:00
dc_bs.id                    100%  113     0.1KB/s   00:00
region_bs.id                100%   31     0.0KB/s   00:00
area_bs.id                  100%   34     0.0KB/s   00:00
gateway.map                 100%  178     0.2KB/s   00:00
schema.id.map               100% 1883     1.8KB/s   00:00
schema.map                  100%  816     0.8KB/s   00:00
gatewayRegionArea.id.map    100%  228     0.2KB/s   00:00
subschema_definition.list   100% 1007KB   1.0MB/s   00:00
metric.id.map               100%  679KB 679.1KB/s   00:00
metric.map                  100%  471KB 471.1KB/s   00:00
cube_names.list             100%  384     0.4KB/s   00:00
schemaIDToKeyID.map         100% 8018     7.8KB/s   00:00
Gateway_Schema_ASR.map      100% 1779KB   1.7MB/s   00:00
Gateway_Schema_GMPLAB1.map  100% 1779KB   1.7MB/s   00:00
Gateway_Schema_GMPLAB2.map  100% 1779KB   1.7MB/s   00:00
Gateway_Schema_NewDelhi.map 100% 1779KB   1.7MB/s   00:00
Gateway_Schema_gmplab1.map  100% 1779KB   1.7MB/s   00:01
Gateway_Schema_gmplab2.map  100% 1779KB   1.7MB/s   00:00
gatewaySchemaMetric.map     100%   10MB  10.4MB/s   00:00
This product may be covered by one or more U.S. or foreign pending or
issued patents listed at www.guavus.com/patents.
Guavus Network Systems NetReflex Platform
_DONE                       100%    0     0.0KB/s   00:00
Summary:
========
Successful Transfers : 4 out of 4
Failed Transfers : No transfer failures.
***************Added ib gateway***************
Adding Gateway configs....
***************Gateway***************
Name: ASR-1
Associated Region: EAST
Location: USA
Schema Version: 15
IP: 10.10.10.255
Timezone: America/New_York
Flow-EDR/Http-EDR Filename Pattern: %DC_MURAL-edr_*_%MM%DD%YYYY%hh%mm%ss*.gz
Bulkstat Filename Pattern: *_%YYYY%MM%DD%hh%mm%ss
Type: HA
Flow-EDR/Http-EDR Filename Path: /data/collector/NewYork/edr111/edr
Bulkstat Filename Path: /data/collector/NewYork/bs111
***************Successfully Added***************
2. Run the show gateways command and verify that all parameter values
are correct.
If a parameter is defined incorrectly, run the add gateway command
again as detailed in Step 1, providing the correct parameters. At the end of
the trace, you are prompted to confirm that you want to replace the
existing gateway with the newly defined one. If the new parameters are
correct, type yes.
pm extension (aggregation center)> show gateways
3. Add IP addresses for the master and standby GCU nodes.
pm extension (aggregation center)> set collector IPs comma-separated-ip-list
For example:
pm extension (aggregation center)> set collector IPs
192.168.1.1,192.168.2.2
Note: These are internal IP addresses.
4. Verify the IP addresses for all Collectors:
pm extension (aggregation center)> show collector IPs
5. Set the BulkStats timezone to UTC in gateway.conf for every gateway. This
is necessary because the ASR internally changes the time zone to GMT for
the BulkStats file. For every BulkStats source, edit the file at the
following path so that it contains the indicated setting:

/data/configs/gateway/gateway.conf
"timezone": "UTC"
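For example, one way to make the change in place with sed (a sketch; inspect the file afterward to confirm the edit, because the exact layout of gateway.conf can vary by deployment):

# force the timezone key to UTC in the gateway configuration
sed -i 's/"timezone": *"[^"]*"/"timezone": "UTC"/' /data/configs/gateway/gateway.conf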
6. Push the gateway configuration to all the Collectors:
pm extension (aggregation center)> push gateway configuration
7. Generate and push all IBs:
pm extension (aggregation center)> generate all ibs
pm extension (aggregation center)> push all ibs
pm extension (aggregation center)> quit
pm extension> quit
(config) # write memory
Synchronizing the EDR and BulkStats IBs on the Standby GCU Node
After the push command completes, fetch the IBs for the EDR data streams by
running the following commands on the standby GCU node from the CLI configure
terminal.
(config) # pmx
pm extension> subshell aggregation_center
pm extension (aggregation center)> fetch all ibs from inbox
Copying IB mobileappname.list [OK]
Copying IB segment.map [OK]
Copying IB apnGroup.map [OK]
Copying IB snAppCat.map [OK]
Copying IB privateKey.txt [OK]
Copying IB apnNetworkIB.map [OK]
Copying IB segment.txt [OK]
Copying IB dcNameClli.map [OK]
Copying IB ReservedNames.json [OK]
Copying IB collection_center.list [OK]
Copying IB ratidtype.map [OK]
Copying IB wngib.id.map [OK]
Copying IB pcsarange.list [OK]
Copying IB ServiceGateway.list [OK]
Copying IB dcRegionArea.map [OK]
Copying IB mobileappcategory.list [OK]
Copying IB model.list [OK]
Copying IB snAppIdStr.map [OK]
Copying IB UnresolvedTraps [OK]
Copying IB subdevice.id.mph [OK]
Copying IB ipGgsn.map [OK]
Copying IB keyword.list [OK]
Copying IB sp.list [OK]
Copying IB url.list [OK]
Copying IB ipSgsn.map [OK]
Copying IB SNMPRecordConverterConfig [OK]
Copying IB blacklist [OK]
Copying IB mobileapptype.list [OK]
Copying IB ivFile.txt [OK]
Copying IB smartDG.list [OK]
Copying IB ipProto.map [OK]
Copying IB mime.list [OK]
Copying IB topsubscriberib.id.map [OK]
Copying IB p2pCat.map [OK]
Copying IB tac_manuf_model.map [OK]
Copying IB subseg.id.map [OK]
Copying IB sp_url_cat.map [OK]
Copying IB symmetricKey.txt [OK]
Copying IB manufacturer.list [OK]
Copying IB IBStore.tab [OK]
Copying IB category.list [OK]
Copying IB ua_mobile_app.map [OK]
Copying IB subscriberib.id.map [OK]
Copying IB port_proto_app.map [OK]
Copying IB ua_manuf_model.map [OK]
Copying IB mobileappname.id [OK]
Copying IB bytes.id [OK]
Copying IB segmentname.id [OK]
Copying IB segment.id.map [OK]
Copying IB apn.id [OK]
Copying IB group.id [OK]
Copying IB apnGroup.id.map [OK]
Copying IB snAppProtocolStr.id [OK]
Copying IB snAppCategory.id [OK]
Copying IB snAppCat.id.map [OK]
Copying IB ratid.id [OK]
Copying IB rattype.id [OK]
Copying IB ratidtype.id.map [OK]
Copying IB dc.id [OK]
Copying IB region.id [OK]
Copying IB area.id [OK]
Copying IB dcRegionArea.id.map [OK]
Copying IB mobileappcategory.id [OK]
Copying IB model.id [OK]
Copying IB snAppid.id [OK]
Copying IB snAppStr.id [OK]
Copying IB snAppIdStr.id.map [OK]
Copying IB ggsnip.id [OK]
Copying IB ggsn.id [OK]
Copying IB ipGgsn.id.map [OK]
Copying IB sp.id [OK]
Copying IB url.id [OK]
Copying IB sgsnip.id [OK]
Copying IB sgsn.id [OK]
Copying IB ipSgsn.id.map [OK]
Copying IB mobileapptype.id [OK]
Copying IB ipProtocolId.id [OK]
Copying IB ipProtocolStr.id [OK]
Copying IB ipProto.id.map [OK]
Copying IB mime.id [OK]
Copying IB p2pProtocolStr.id [OK]
Copying IB p2pCategory.id [OK]
Copying IB p2pCat.id.map [OK]
Copying IB manufacturer.id [OK]
Copying IB category.id [OK]
Copying IB Destport.id [OK]
Copying IB ipProtoStr.id [OK]
Copying IB appProtoStr.id [OK]
Copying IB port_proto_app.id.map [OK]
pm extension (aggregation center)>
pm extension (aggregation center)> quit
pm extension> quit
(config) # write memory
Fetching BulkStats IBs
To configure your MURAL environment to evaluate BulkStats data, go to the
bulkstats PMX subshell and fetch the IBs from the inbox.
(config) # pmx
pm extension> subshell bulkstats
pm extension (bulk stats)> fetch all ibs from inbox
Copying IB gatewaySchemaMetric.map [OK]
Copying IB gatewayIDVersion.map [OK]
Copying IB key.map [OK]
Copying IB gateway.map [OK]
Copying IB schema.map [OK]
Copying IB subschema_definition.list [OK]
Copying IB metric.map [OK]
Copying IB cube_names.list [OK]
Copying IB key.id.map [OK]
Copying IB gateway_bs.id [OK]
Copying IB version_bs.id [OK]
Copying IB dc_bs.id [OK]
Copying IB region_bs.id [OK]
Copying IB area_bs.id [OK]
Copying IB gatewayRegionArea.id.map [OK]
Copying IB schema.id.map [OK]
Copying IB metric.id.map [OK]
pm extension (bulk stats)>
Create a File for Uncategorized URL, UA, and TAC Reports
To create the file needed for reports for uncategorized URLs, UAs, and TACs,
perform the following steps:
1. On the master GCU node, use a text editor such as vi to create a file called
/data/work/serverFile_uncatReports. It specifies the destination
directory for uncategorized URL, UA, and TAC reports.
The delimiter between the four terms in the file is a comma followed by a
space:
IP-address, username, password, destination-local-directory
All parameters refer to the machine to which the reports are to be sent.
The following example directs uncategorized reports to the
/data/offline_uncat_reports directory on the machine with IP address
192.168.156.96.

192.168.156.96, admin, password, /data/offline_uncat_reports
2. Repeat Step 1 on the standby GCU node.
Proceed to "Generating and Installing Security Certificates" on the facing page.
166
Copyright © 2014, Cisco Systems, Inc.
MURAL Installation Guide for Starter Pack
Generating and Installing Security Certificates
This topic explains how to generate and install common security certificates for
all Tomcat processes in the distributed Rubix setup. The instructions are for
generating a certificate with the Microsoft certificate authority (CA), but you can
use any third party CA.
Note: MURAL supports the Mozilla Firefox and Chrome browsers.
Backing Up and Generating the Keystore Files
1. Log in to the master Rubix node and make a backup of the original
keystore file.
# cd /data/apache-tomcat/apache-tomcat-version
# mv keystore keystore_orig
2. Run the keytool command to generate the new keystore file. At the
prompts, substitute values as follows for the indicated variables:
- password—We recommend the value rubix123.

- domain-name—Fully qualified domain name, or URL, you are securing.
  If you are requesting a wildcard certificate, add an asterisk (*) to the
  left of the common name where you want the wildcard; for example,
  *.companyname.com.

- unit—Optional.

- organization-name—Full legal name of your organization (for example,
  Cisco Systems, Inc.).

- city—Full unabbreviated name of the city in which your organization is
  located or registered.

- state—Full unabbreviated name of the state or province where your
  organization is located.

- country-code—Two-letter International Organization for Standardization
  (ISO) code for the country in which your organization is legally
  registered. It is US for the United States.

# cd /usr/java/latest/bin/
# ./keytool -keysize 2048 -genkey -alias tomcat -keyalg RSA -keystore keystore
Enter keystore password: password
Re-enter new password: password
What is your first and last name?
  [Unknown]: domain-name
What is the name of your organizational unit?
  [Unknown]:
What is the name of your organization?
  [Unknown]: organization-name
What is the name of your City or Locality?
  [Unknown]: city
What is the name of your State or Province?
  [Unknown]: state
What is the two-letter country code for this unit?
  [Unknown]: country-code
Is CN=domain-name, OU=Unknown, O=organization-name, L=city, ST=state,
C=country-code correct?
  [no]: yes
Enter key password for <tomcat>
  (RETURN if same as keystore password):
3. Run the following command to create a Certificate Signing Request (CSR)
file called csr.csr. When prompted for the keystore password, type the
same password as for the last prompt in step 2.
# ./keytool -certreq -keyalg RSA
-alias tomcat -file csr.csr -keystore keystore
4. Open the CSR file in a text editor such as vi , and copy all of the text into the
local paste buffer. You paste the information into the certificate request in
the next procedure.
Downloading the Signed Certificate
To download a signed certificate, perform the following steps:
1. Log in to https://mx1.guavus.com/certsrv.
2. Enter your username and password.
3. From the list of tasks, select Request a certificate.
4. On the next screen, select Advanced Certificate Request.
5. On the next screen (titled Submit a Certificate Request or Renewal
Request), paste in the text from the csr.csr file, which you copied to the
paste buffer in the previous procedure. Select Web Server for the
Certificate Template.
6. Click Download certificate.
7. When the certificate is finished downloading, rename it to tomcat.cer.
8. Move the tomcat.cer file to /usr/java/latest/bin/ on the server where
the keystore file was generated (the same directory as the keystore).

scp tomcat.cer admin@local-IP-address:/usr/java/latest/bin
Downloading the CA Certificate
To download the CA certificate, perform the following steps:
1. Log in to https://mx1.guavus.com/certsrv.
2. Enter your username and password.
3. From the list of tasks, select Download a CA certificate, certificate
chain, or CRL.
4. On the next screen, select Download CA certificate.
5. When the certificate is finished downloading, rename it to root.cer.
6. Move the root.cer file to /usr/java/latest/bin/ on the server where
the keystore file was generated (the same directory as the keystore).

scp root.cer admin@local-IP-address:/usr/java/latest/bin
7. Click the Install CA certificate link.
Note: Use this option on all client machines to install the CA
certificate on the client, so that each client can identify certificates
signed using GUAVUS-MX1-CA.
8. Select all the check boxes and then click OK.
Note: If the CA certificate is already installed, a message to that
effect is displayed.
Installing the Signed Certificate in the Keystore
For the CA GUAVUS-MX1-CA, add the root (CA) certificate and signed certificates
(downloaded during the previous procedures) to the key store.
1. Run the following keytool command to install the signed certificates and
root certificate in the key store.
# cd /usr/java/latest/bin/
# ./keytool -import -alias root -keystore keystore -trustcacerts -file root.cer
Enter keystore password: password
Owner: CN=Guavus-MX1-CA, DC=guavus, DC=com
Issuer: CN=Guavus-MX1-CA, DC=guavus, DC=com
Serial number: 7570961243691e8641088832d6c218a5
Valid from: valid-timestamp until: expiration-timestamp
Certificate fingerprints:
MD5:  27:BE:E3:EA:DB:16:A1:79:8B:00:96:A4:54:D8:16:6D
SHA1: 8B:6B:44:2B:08:FE:3F:C5:8B:B5:AD:FA:EE:DF:E2:A7:07:59:D0:49
Signature algorithm name: SHA1withRSA
Version: 3
Extensions:
#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:2147483647
]
#2: ObjectId: 2.5.29.15 Criticality=false
KeyUsage [
  DigitalSignature
  Key_CertSign
  Crl_Sign
]
#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: D4 DA C9 FC FB AF 5E 32  BF C0 A3 F1 6F 03 39 C3  ......^2....o.9.
0010: 29 36 84 7D                                        )6..
]
]
#4: ObjectId: 1.3.6.1.4.1.311.21.1 Criticality=false
Trust this certificate? [no]: yes
Certificate was added to keystore
2. Install the signed certificate. When prompted, type the keystore password.
# ./keytool -import -alias tomcat -keystore keystore -trustcacerts -file tomcat.cer
3. Copy the new keystore to the apache-tomcat-version directory for the
EDR, Bulkstats, and RGE applications on both the master and standby
machines. For the standby-IP-address variable, substitute the management
IP address of the standby Rubix node.
# cp keystore /data/apache-tomcat/apache-tomcat-version
# scp keystore admin@standby-IP-address:/data/apache-tomcat/apache-tomcat-version/
4. Open the /data/apache-tomcat/apache-tomcat-version/conf/server.xml
file in a text editor such as vi, and verify that for all Tomcat
processes the value of the keystorePass attribute of the <Connector> tag
matches the keystore password given above:
<Connector port="6443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
keystoreFile="keystore"
keystorePass="rubix123"
clientAuth="false" ciphers=" SSL_RSA_WITH_RC4_128_MD5,
SSL_RSA_WITH_RC4_128_SHA" sslProtocol="TLS" />
Proceed to "Initializing and Verifying Data Feeds from ASRs" on the facing page.
Initializing and Verifying Data Feeds from ASRs
This topic explains how to configure Mobility Unified Reporting and Analytics
(MURAL) to accept EDR and bulk statistics (bulkstats) data files sent by Cisco
ASRs, and how to verify that data is being accepted and processed correctly.
Creating a MURAL User Account for ASRs
To create a MURAL user account for use by ASRs when sending EDR and bulkstats
data files to MURAL, perform the following steps:
1. Use SSH to log in to the master GCU node and create the user account. For
the username and password parameters, specify the username and
password configured on the ASR for EDR and bulkstats file transfers.
> en
# conf t
(config) # username username password password
(config) # write memory
(config) # _shell
2. Use a text editor such as vi to add the indicated two lines to the
/etc/ssh/sshd_config file. They enable recording of PAM-related
authentication errors in the /var/log/messages file.
# mount -o remount,rw /
# vi /etc/ssh/sshd_config
UsePAM no
PasswordAuthentication yes
3. Run the pm process sshd restart command.
> en
# conf t
(config) # pm process sshd restart
(config) # _shell
# mount -o remount,ro /
4. Repeat steps 1 through 3 on the standby GCU node.
Initializing the Data Feed on ASRs
The installation and configuration of MURAL is now complete. Working on an ASR,
initialize the transfer of EDR and bulkstats data files to MURAL. Ensure that the
timestamp portion of the generated filenames corresponds to the current date
and time (or a future date and time), not a time in the past. Make note of the
timestamp, which you use in commands in subsequent sections.
Verifying Data Storage in HDFS
1. Use SSH to log in to the master GCU node and run the indicated hadoop
commands to verify that data from ASRs is arriving at the Collector nodes
and being stored in the Hadoop Distributed File System (HDFS).
For the timestamp portion of the directory name (YYYY/MM/DD/HH/mm),
substitute the value you configured in "Initializing the Data Feed on ASRs"
above. Note in particular that the mm (minutes) variable is a multiple of 5
from 00 through 55 (00, 05, 10, 15, and so on).
> en
# _shell
# hadoop dfs -ls /data/collector/1/output/edrflow/YYYY/MM/DD/HH/mm/* 2>/dev/null
# hadoop dfs -ls /data/collector/1/output/edrhttp/YYYY/MM/DD/HH/mm/* 2>/dev/null
# hadoop dfs -ls /data/collector/1/output/bulkStats/YYYY/MM/DD/HH/mm/* 2>/dev/null
2. If one or more of the hadoop commands do not immediately produce
output, repeat them periodically, and do not proceed to the next procedure
until there is output for all three.
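A short loop can save the manual repetition (a sketch; it uses the same paths and substitutions as above and re-checks every five minutes):

# wait until all three output directories contain files for the chosen timestamp
for dir in edrflow edrhttp bulkStats; do
    until hadoop dfs -ls /data/collector/1/output/$dir/YYYY/MM/DD/HH/mm/* 2>/dev/null | grep -q . ; do
        sleep 300
    done
    echo "$dir: data present"
done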
Setting the Data Start Time
To set the start time for MURAL data processing to the timestamp that you
configured in "Initializing the Data Feed on ASRs" on the previous page, perform
the following steps:
1. Use SSH to log in to the master GCU node and make the / file system
writable.
> en
# _shell
# mount -o remount,rw /
2. Run the setOozieTime script.

- For the dataStartTime parameter, specify the value you configured in
  "Initializing the Data Feed on ASRs" on the previous page, in the
  following format:

  YYYY-MM-DDTHH:mmZ

  Again, the mm (minutes) variable is a multiple of 5 from 00 through 55
  (00, 05, 10, 15, and so on).

- For the node parameter, specify the management IP address of the master
  GCU node.

# cd /opt/deployment/Mural_setStartTime/
# ./setOozieTime --dataStartTime YYYY-MM-DDTHH:mmZ --node master-gcu-address --password admin-password --verbose
For example, if the date is May 1, 2014, 06:00, the dataStartTime
parameter is 2014-05-01T06:00Z.
# ./setOozieTime --dataStartTime 2014-05-01T06:00Z --node 192.168.147.11 --password admin@123 --verbose

Ensure that there is a continuous flow of data into Hadoop, without any
gaps, from the specified time onward.
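If you need to produce a correctly formatted start time for the current moment, a small helper can floor the current UTC time to a 5-minute boundary (a sketch, assuming GNU date):

# print the current UTC time, rounded down to a 5-minute boundary, in setOozieTime format
date -u -d @$(( $(date -u +%s) / 300 * 300 )) "+%Y-%m-%dT%H:%MZ"

For example, at 06:03 UTC on May 1, 2014 this prints 2014-05-01T06:00Z.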
3. Run the setOozieTime script again, this time specifying the management
IP address of the standby GCU node for the node parameter.
# cd /opt/deployment/Mural_setStartTime/
# ./setOozieTime --dataStartTime YYYY-MM-DDTHH:mmZ --node standby-gcu-address --password admin-password --verbose
Starting the MURAL Data Processing Jobs
To start the MURAL software jobs that process incoming data, use SSH to log in to
the master GCU node and run the run job all command in the Oozie subshell:
> en
# conf t
(config)# pmx
Welcome to pmx configuration environment.
pm extension> subshell oozie
pm extension (oozie)> run job all
Verify in the output that all jobs started successfully.
Verifying Data Processing on the Compute Blades
To verify correct processing of EDR data on the Compute blades (which are
Hadoop Data nodes), perform the following steps.
1. Wait two hours after running the run job all command in "Starting the
MURAL Data Processing Jobs" above.
2. Use SSH to log in to the master GCU node.
> en
# _shell
# mount -o remount,rw /
# cd /opt/deployment/Mural_setStartTime/
# ./setOozieTime --dataStartTime data-start-time --node collector-mgmt-IP --password admin-password --verbose
3. Display the last timestamp for the Core job.
> en
# _shell
# hadoop dfs -text /data/CoreJob/done.txt 2>/dev/null
4. Display the last timestamp for EDR data cubes being generated by the
EDR job.
# hadoop dfs -text /data/EDR/done.txt 2>/dev/null
5. Display the last timestamp for CubeExporter data cubes.
# hadoop dfs -text /data/CubeExporter/done.txt 2>/dev/null
6. Display the last timestamp for generated and exported bulkstats
data cubes.
# hadoop dfs -text /data/BulkStat/done.txt 2>/dev/null
# hadoop dfs -text /data/BSAgg15min/done.txt 2>/dev/null
# hadoop dfs -text /data/BulkStatExporter_15min/done.txt 2>/dev/null
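To check all of the job timestamps in one pass, a short loop over the same done.txt files works (a sketch using the paths shown above):

# print the last processed timestamp for each MURAL job
for job in CoreJob EDR CubeExporter BulkStat BSAgg15min BulkStatExporter_15min; do
    echo -n "$job: "
    hadoop dfs -text /data/$job/done.txt 2>/dev/null
done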
Verifying Data on the Insta Blades
To verify correct processing of EDR and bulkstats data on the Insta blades,
perform the following steps.
1. Use SSH to log in to the master Insta node and display the name of the
database for EDR data:
> en
# _shell
# cli -t "en" "conf t" "show runn full" |
grep "insta instance 0 cubes-database" | awk -F ' ' '{print $5}'
database_mural
2. Open the idbmysql user interface and specify database_mural as the
database.
# idbmysql
Welcome to the MySQL monitor.
Commands end with ; or \g.
...
mysql> use database_mural;
Database changed
3. Display the values in the mints and maxts columns for the 60-minute bin
class and -1 aggregation level (shown in the first row of the following
example).
mysql> select * from bin_metatable;
+----------+---------------------+------------+------------+---------+
| binclass | aggregationinterval | mints      | maxts      | bintype |
+----------+---------------------+------------+------------+---------+
| 60min    |                  -1 | 1350126000 | 1350594000 | NULL    |
| 60min    |               86400 | 1350086400 | 1350432000 | NULL    |
| 60min    |              604800 |          0 |          0 | NULL    |
| 60min    |             2419200 |          0 |          0 | NULL    |
+----------+---------------------+------------+------------+---------+
4 rows in set (1.14 sec)

Press Ctrl+D to exit.
mysql> Bye
4. Run the date command to convert the values from the mints and maxts
columns to human-readable format. The following example indicates that
data was processed between 11:00 hours on October 13 and 21:00 hours on
October 18.
# date -d @1350126000
Sat Oct 13 11:00:00 UTC 2012
# date -d @1350594000
Thu Oct 18 21:00:00 UTC 2012
5. Display the name of the database configured for bulkstats data:
# cli -t "en" "conf t" "show runn full" |
grep "insta instance 1 cubes-database" | awk -F ' ' '{print $5}
178
Copyright © 2014, Cisco Systems, Inc.
MURAL Installation Guide for Starter Pack
'bulkstats
6. Open the idbmysql user interface and select bulkStats as the database.
# idbmysql
Welcome to the MySQL monitor.
Commands end with ; or \g.
...
mysql> use bulkStats;
Database changed
7. Display the values in the mints and maxts columns for the 900
aggregation interval (shown in the second row in the example).
mysql> select * from bin_metatable;
+----------+---------------------+------------+------------+---------+
| binclass | aggregationinterval | mints      | maxts      | binType |
+----------+---------------------+------------+------------+---------+
| 5min     |                  -1 |          0 |          0 | NULL    |
| 5min     |                 900 | 1364713200 | 1367293500 | NULL    |
| 5min     |                3600 | 1364713200 | 1365004800 | NULL    |
| 5min     |               86400 | 1364688000 | 1364860800 | NULL    |
| 5min     |              604800 |          0 |          0 | NULL    |
| 5min     |             2419200 |          0 |          0 | NULL    |
+----------+---------------------+------------+------------+---------+
6 rows in set (12.18 sec)

mysql> quit
8. Run the date command to convert the values from the mints and maxts
columns to human-readable format. The following example indicates that
data was processed between 7:00 hours on March 31 and 3:45 hours on
April 30.
# date -d @1364713200
Sun Mar 31 07:00:00 UTC 2013
# date -d @1367293500
Tue Apr 30 03:45:00 UTC 2013
Starting UI Processes and Verifying Data
Start the UI processes and verify UI data. Ensure that the URL is set up in the DNS
for the production system.
Note: You should only start UI Tomcat instances after at least two hours of data
has been pushed into the Insta node.
1. Use SSH to log in to the master GCU node and start the EDR process.
> en
# conf t
(config) # pm process rubix restart
(config) # rubix modify-app atlas enable
(config) # rubix modify-app atlas modify-instance 1 enable
2. In a separate terminal, use SSH to log in to the standby GCU node and start
the EDR process.
> en
# conf t
(config) # pm process rubix restart
(config) # rubix modify-app atlas enable
(config) # rubix modify-app atlas modify-instance 1 enable
3. Wait two minutes after issuing the commands in Steps 1 and 2, then run the
following commands on both the master and standby GCU nodes to start the
bulkstats process.
(config) # rubix modify-app bulkstats enable
(config) # rubix modify-app bulkstats modify-instance 1 enable
4. Wait two minutes after issuing the commands in Step 3, then run the
following commands on both the master and standby GCU nodes to start the
reports process.

(config) # rubix modify-app reportAtlas enable
(config) # rubix modify-app reportAtlas modify-instance 1 enable

5. Wait two minutes after issuing the commands in Step 4, then run the
following commands on both the master and standby GCU nodes to start the
RGE process.

(config) # rubix modify-app rge enable
(config) # rubix modify-app rge modify-instance 1 enable
6. Back up the configurations. For instructions, see the MURAL Operations and
Troubleshooting Guide.
Users can now access the MURAL UI in a browser at
https://domain-name:8443/
where domain-name is the domain name configured via GMS for the UI nodes.
At this point, the only existing user account is admin, for which the
recommended password is admin@123. We recommend that you create
nonadministrative accounts for regular users. For instructions, see the
MURAL User Guide.
Because the common certificate installation procedure is not finalized, each user
must also access the URLs for the Bulkstats and RGE ports and accept the
certificate exception at each:
https://domain-name:20443/
https://domain-name:30443/
Mandatory Parameters for Incoming ASR Files
The following is the list of mandatory headers that need to be present in files
coming from the ASR so that the MURAL system can deduce meaningful
information.
Mandatory Attributes for Flow EDRs for MURAL
Flow EDR data sent by the ASR platform to the MURAL system must contain the
following attributes:
- sn-start-time
- sn-end-time
- radius-calling-station-id
- sn-app-protocol
- p2p-protocol
- sn-server-port
- sn-volume-amt-ip-bytes-downlink
- sn-volume-amt-ip-pkts-uplink
- sn-volume-amt-ip-pkts-downlink
- sn-volume-amt-ip-bytes-uplink
For example:

#sn-start-time,sn-end-time,radius-calling-station-id,sn-server-port,sn-app-protocol,
sn-volume-amt-ip-bytes-uplink,sn-volume-amt-ip-bytes-downlink,sn-volume-amt-ip-pkts-uplink,
sn-volume-amt-ip-pkts-downlink,p2p-protocol,ip-server-ip-address,bearer-3gpp rat-type,
voip-duration,sn-direction,traffic-type,bearer-3gpp imei,bearer-3gpp sgsn-address,
bearer-ggsn-address,sn-flow-end-time,sn-flow-start-time,radius-called-station-id,
bearer-3gpp user-location-information,sn-subscriber-port,ip-protocol,sn-rulebase,
tcp-os-signature,bearer-3gpp charging-id

1381518310,1381518337,1000000018,70000,29,20000,20000,182,36,iax,27.9.126.155,2,1,
FromMobile,,,2.2.2.1,27.23.157.2,1381518337,1381518310,Sushfone-2,231-10-1073-10065,
43769,1985,rb31,,2
Mandatory HTTP EDR Attributes for MURAL
HTTP EDR data sent to the MURAL system must contain the following attributes:
- sn-start-time
- sn-end-time
- transaction-downlink-packets
- transaction-uplink-packets
- transaction-downlink-bytes
- transaction-uplink-bytes
- http-content type
- radius-calling-station-id
- http-host
For example:

#sn-start-time,sn-end-time,radius-calling-station-id,transaction-uplink-bytes,
transaction-downlink-bytes,ip-subscriber-ip-address,ip-server-ip-address,http-host,
http-content type,http-url,voip-duration,traffic-type,transaction-downlink-packets,
transaction-uplink-packets,bearer-3gpp rat-type,radius-called-station-id,
tcp-os-signature,bearer-3gpp imei,http-request method,http-reply code,http-user-agent

1381518310,1381518338,1000000019,15000,15000,1.1.1.1,27.2.248.155,
images.craigslist.org,image/png,images.craigslist.org,11,,60,1,1,Sushfone-1,,,GET,
506 Variant Also Negotiates,"Dalvik/1.6.0 (Linux; U; Android 4.0.3; Galaxy Nexus
Build/ICL53F)"
ASR-Side Configuration
The corresponding configuration on the side of the ASR platform is as follows:
edr-format edr-flow-format
  attribute sn-start-time format seconds priority 1
  rule-variable ip subscriber-ip-address priority 4
  attribute sn-subscriber-port priority 6
  attribute sn-start-time format seconds priority 10
  attribute sn-charge-volume ip pkts uplink priority 12
  attribute sn-charge-volume ip pkts downlink priority 13
  attribute sn-charge-volume ip bytes uplink priority 14
  attribute sn-charge-volume ip bytes downlink priority 15
  attribute sn-direction priority 21
  rule-variable bearer ggsn-address priority 23
  ;rule-variable bearer 3gpp2 bsid priority 24
  attribute sn-flow-start-time format seconds priority 26
  attribute sn-flow-end-time format seconds priority 27
  attribute sn-flow-id priority 28
  attribute sn-closure-reason priority 29
  attribute radius-calling-station-id priority 30
  rule-variable p2p protocol priority 31
  rule-variable bearer 3gpp imsi priority 35
  attribute radius-called-station-id priority 40
  rule-variable ip server-ip-address priority 60
  attribute sn-server-port priority 70
  attribute sn-app-protocol priority 80
  attribute sn-parent-protocol priority 81
  rule-variable ip protocol priority 82
  attribute sn-volume-amt ip bytes uplink priority 100
  attribute sn-volume-amt ip bytes downlink priority 110
  attribute sn-volume-amt ip pkts uplink priority 120
  attribute sn-volume-amt ip pkts downlink priority 130
  rule-variable bearer 3gpp charging-id priority 140
  rule-variable bearer 3gpp imei priority 141
  rule-variable bearer 3gpp rat-type priority 142
  rule-variable bearer 3gpp user-location-information priority 143
  rule-variable bearer 3gpp sgsn-address priority 144
  rule-variable traffic-type priority 160
  rule-variable voip-duration priority 170
  attribute sn-end-time format seconds priority 180
#exit
edr-format edr-http-format
  rule-variable ip subscriber-ip-address priority 4
  attribute sn-charge-volume ip pkts uplink priority 8
  attribute sn-charge-volume ip pkts downlink priority 9
  attribute sn-start-time format seconds priority 10
  attribute sn-charge-volume ip bytes downlink priority 11
  attribute sn-charge-volume ip bytes uplink priority 12
  attribute sn-end-time format seconds priority 20
  attribute radius-calling-station-id priority 30
  attribute radius-called-station-id priority 40
  rule-variable ip server-ip-address priority 50
  rule-variable http user-agent priority 55
  rule-variable http host priority 70
  rule-variable http content type priority 80
  attribute transaction-downlink-bytes priority 90
  attribute transaction-uplink-bytes priority 100
  attribute transaction-downlink-packets priority 110
  attribute transaction-uplink-packets priority 120
  rule-variable bearer 3gpp charging-id priority 160
#exit
Manually Manufacturing the Master GCU Blade
Note: This topic describes an alternative method for manufacturing the master
GCU blade. Perform the procedures in this topic only if you are not using the
standard method, which uses GMS Lite as described in "Manufacturing the Master
GCU Blade Using GMS Lite" on page 81.
The master GCU blade hosts the master General Management System (GMS)
node, the MURAL platform component that enables centralized installation and
monitoring of all the blades on the MURAL system. (It also hosts the Collector and
UI/Caching nodes.)
Follow these steps to manufacture (install the operating system software on) the
master GCU blade.
Before you begin:
- Configure Serial over LAN (SOL) on all the blades during EMC setup.
- Locate your CIQ, and refer to it for such details as UCS access credentials
and KVM SOL IP address.
To manufacture the master GCU blade, perform the following steps:
1. Download the ISO image included with the MURAL software package to the
machine from which you will access the Cisco UCS blades.
The ISO image filename is
/data/mfgcd-guavus-x86_64-20140315-133212.iso
To verify the MD5 checksum of the image, run the md5sum filename
command.
# md5sum /data/mfgcd-guavus-x86_64-20140315-133212.iso
48fc39f77bc7ab8847ca2f7a684d2ace /data/mfgcd-guavus-x86_64-20140315-133212.iso
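If you prefer a scripted check, the following minimal Python sketch (not part of the MURAL package) recomputes the MD5 digest of the downloaded ISO in chunks and compares it against the expected value shown above. The script filename used in the usage line is hypothetical.

import hashlib
import sys

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file without loading it whole."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    iso_path, expected = sys.argv[1], sys.argv[2]
    actual = md5_of(iso_path)
    if actual != expected:
        sys.exit("Checksum mismatch: got %s, expected %s" % (actual, expected))
    print("Checksum OK: " + iso_path)

For example: python check_md5.py /data/mfgcd-guavus-x86_64-20140315-133212.iso 48fc39f77bc7ab8847ca2f7a684d2ace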
2. Fetch and mount the MURAL ISO image file. For the local-directory-path
variable, substitute the directory to which you downloaded the image file in
Step 1. A trace similar to the one shown after the image mount command
indicates that the mount operation succeeded.
(config) # image fetch scp://laptop-IP-Address/local-directory-path/mfgcd-guavus-x86_64-20140315-133212.iso
(config) # image mount mfgcd-guavus-x86_64-20140315-133212.iso
Copying linux...
Copying rootflop.img...
Copying image.img...
3. Verify that the image file has been copied to the standard directory.
(config) # ls /var/opt/tms/images
mfgcd-guavus-x86_64-20140315-133212.iso
4. Open the Cisco UCS - KVM Launch Manager in a browser and enter your
login credentials.
Note: For best performance, access the KVM Launch Manager in Firefox
with Java version 6 or greater.
The UCS - KVM Launch Manager application opens and displays all blades
available on the chassis.
5. Click the Launch button for the first node (Server1 in the following figure).
Click OK to download and open the kvm.jnlp file.
Click OK to clear the keyboard access warning message that appears.
The console for the port opens.
6. Navigate to KVM Console > Virtual Media, click Add Image, and
specify the path of the ISO image that you downloaded in Step 1.
7. Click the check box in the Mapped column for the added ISO image, which
is then mounted.
8. Reboot the blade to use the newly mounted image. Access the KVM tab and
select Ctrl-Alt-Del from the Macros > Static Macros drop-down menu.
9. When the boot screen appears, press F6 to select the boot menu.
10. Select Enter Setup to open the setup utility.
11. On the Boot Options tab, verify that the value in the Boot Option #1
field is *CD/DVD as shown in the following figure. If you change the value,
press the F10 key to save and exit; if the value is already correct, press
the Esc key to exit.
12. At the # prompt, run the manufacture command to manufacture the
master GCU blade.
The following command is appropriate for a single disk configured as
RAID 1, as indicated by the -L 1D argument. The master GCU blade has a
second disk that functions as the mirrored disk in the RAID configuration.
# manufacture.sh -v -t -f /mnt/cdrom/image.img -m 1D -L 1D --cc no --cs no --cl no -a
Output similar to the following traces the manufacturing process.
Running /etc/init.d/rcS.d/S34automfg
-- Automatic manufacture is not enabled. Type 'automfg' to start it.
BusyBox v1.00 (2010.12.03-23:16+0000) Built-in shell (ash)
Enter 'help' for a list of built-in commands.
-sh: can't access tty: job control turned off
Processing /etc/profile... Done
# manufacture.sh -v -t -f /mnt/cdrom/image.img
====== Starting manufacture at YYYYMMDD-hhmmss ======
Called as: /sbin/manufacture.sh -v -t -f /mnt/cdrom/image.img
===============================================
Manufacture script starting
===============================================
----------------------------------
Model selection
----------------------------------
Product model ('?' for info) (DESKTOP VM VM_2D 1D 2D 3D 4D 2D_EXT4) [2D]: 2D
== Using model: 2D
----------------------------------
Kernel type selection
----------------------------------
Kernel type (uni smp) [smp]:
== Using kernel type: smp
----------------------------------
Layout selection
----------------------------------
Layout (STD) [2D]:
== Using layout: 2D
----------------------------------
Partition name-size list selection
----------------------------------
Partition name-size list [VAR 40960 SWAP 40960]:
== Using partition name-size list: VAR 40960 SWAP 40960
----------------------------------
Device list selection
----------------------------------
Device list [/dev/sda /dev/sdb]:
== Using device list: /dev/sda /dev/sdb
----------------------------------
Interface list selection
----------------------------------
Interface list [eth0 eth1]:
== Using interface list: eth0 eth1
----------------------------------
Interface naming selection
----------------------------------
Interface naming [none]:
== Using interface naming: none
== Smartd enabled
----------------------------------
CMC server settings
----------------------------------
Enable CMC server (yes no) [yes]: no
== CMC server enabled: no
----------------------------------
CMC client settings
----------------------------------
Enable CMC client (yes no) [yes]: no
== CMC client enabled: no
== CMC client auto-rendezvous enabled: no
== CMC server address for rendezvous: (none)
----------------------------------
Cluster settings
----------------------------------
Enable cluster (yes no) [no]: no
== Cluster enable: no
Cluster ID:
== Cluster ID: (none)
Cluster description:
== Cluster description: (none)
Cluster interface:
== Cluster interface: (none)
Cluster master virtual IP address [0.0.0.0]:
== Cluster master virtual IP address: 0.0.0.0
Cluster master virtual IP masklen [0]:
== Cluster master virtual IP masklen: 0
Cluster shared secret:
Number Start End Size File system Name Flags
 1 0.02MiB 30000MiB 30000MiB ext3 primary
Writing partition table to disk
Disk /dev/sdb: 572326MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
 1 0.02MiB 572326MiB 572326MiB ext3 primary
==== Making filesystems
== Creating ext3 filesystem on /dev/sda2 for BOOT1
== Creating ext3 filesystem on /dev/sda3 for BOOT2
== Creating ext3 filesystem on /dev/sda1 for BOOTMGR
== Creating ext3 filesystem on /dev/sda8 for CONFIG
== Nothing to do on /dev/sda9 for HA
== Creating ext3 filesystem on /dev/sda5 for ROOT1
== Creating ext3 filesystem on /dev/sda6 for ROOT2
== Making swap on /dev/sda7 for SWAP
Setting up swapspace version 1, size = 42952404 kB
== Creating ext3 filesystem on /dev/sda10 for VAR
== System successfully imaged
-- Writing Host ID: 3b0455ef813d
== Zeroing the destination partition disk /dev/sda9 with dd
== Calling imgverify to verify manufactured system
== Using layout: 2D
== Using dev list: /dev/sda /dev/sdb
== Verifying image location 1
=== Mounting partitions
=== Checking manifest
=== Unmounting partitions
=== Image location 1 verified successfully.
== Verifying image location 2
=== Mounting partitions
=== Checking manifest
=== Unmounting partitions
=== Image location 2 verified successfully.
== Done
====== Ending manufacture at YYYYMMDD-hhmmss ======
-- Manufacture done.
#
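If you capture the console output to a file, a quick scripted check can confirm that the run completed cleanly. The following Python helper is hypothetical (not part of the MURAL package); it simply looks for the success markers shown in the trace above.

import sys

# Success markers taken from the manufacture trace above.
REQUIRED_MARKERS = [
    "== System successfully imaged",
    "=== Image location 1 verified successfully.",
    "=== Image location 2 verified successfully.",
    "-- Manufacture done.",
]

def check_manufacture_log(path):
    """Return True when every success marker appears in the captured log."""
    with open(path, encoding="utf-8", errors="replace") as f:
        log = f.read()
    missing = [m for m in REQUIRED_MARKERS if m not in log]
    for m in missing:
        print("missing marker: " + m, file=sys.stderr)
    return not missing

if __name__ == "__main__":
    sys.exit(0 if check_manufacture_log(sys.argv[1]) else 1)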
13. In the UCS - KVM Launch Manager, navigate to KVM Console >
Virtual Media and click the check box in the Mapped column of the ISO
image file, to remove the check and unmount the file.
14. Run the reboot command to reboot the blade from the newly installed image.
# reboot
15. Use SSH to log in to the master GCU blade as user admin.
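If you want to script the login check in Step 15, an SSH library such as paramiko can be used. This is a hypothetical sketch, not part of this guide: the host placeholder gcu-master-ip and the choice of show version as a harmless test command are assumptions.

import getpass
import paramiko

# Hypothetical reachability check: log in to the master GCU blade as
# user admin and run a harmless command over SSH.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
client.connect("gcu-master-ip", username="admin",
               password=getpass.getpass("admin password: "))
stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())
client.close()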
Return to "Verifying Zoning and FLOGI on the Fabric Interconnects" on page 86
and perform the standard steps from there onward.