ALLVAC INFORMATION TECHNOLOGIES:
ORACLE RAC ON LINUX
Version 1 Revision 2
3/6/2016
Allvac
2020 Ashcraft Avenue.
Monroe, NC 28110
http://www.allvac.com
Revision History
The following table lists a chronological history of changes to this document.

V#r#   Date         Description of Change                    Technical Communicator   Subject Matter Expert (SME)
V1r0   11/15/2004   Original                                 Ryan Frank               Ryan Frank
V1r1   01/14/2005   Removed Windows sections                 Ryan Frank               Ryan Frank
V1r2   10/26/2005   Updated to include the new NFS server    Ryan Frank               Ryan Frank
Allvac
Confidential & Proprietary
3/6/2016
Page i
Contents
REVISION HISTORY ................................................................................................................ I
CONTENTS ........................................................................................................................... II
INTRODUCTION .................................................................................................................... 4
Scope ............................................................................................................................... 4
Purpose ............................................................................................................................ 4
HARDWARE SELECTION .......................................................................................................... 5
Manufacturer.................................................................................................................... 5
Configuration ................................................................................................................... 5
BASE: OPERATING SYSTEM INSTALL/SETUP ............................................................................... 6
RedHat Linux .................................................................................................................... 6
OS Build/Install ........................................................................................................... 6
Step 1: Insert CD-Rom ............................................................................................. 6
Step 2: Insert Floppy ................................................................................................ 6
Step 3: Boot from CD-Rom ........................................................................................ 6
Step 4: Initiate the Install ......................................................................................... 6
Configure OS (Network, Security, Authentication, etc…).................................................... 7
Step 1: Network Configuration ................................................................................... 7
Step 2: Hostname Configuration ................................................................................ 7
Step 3: DNS Resolution ............................................................................................. 9
Step 4: Security Configuration/Validation .................................................................... 9
Step 5: Reboot ...................................................................................................... 10
Setup Resources ........................................................................................................ 11
Step 1: Identify/Install module for SAN HBA .............................................................. 11
Step 2: Add SCSI LUN’s .......................................................................................... 12
Step 3: Creating multi-path device (From ONE node).................................................. 13
ORACLE: OPERATING SYSTEM RESOURCES/CONFIGURATION ........................................................ 17
RedHat Linux .................................................................................................................. 17
Account Creation/Authentication .................................................................................. 17
Step 1: Create account ........................................................................................... 17
Step 2: Create Group.............................................................................................. 17
Step 3: Create home directory ................................................................................. 17
OCFS Install/Configure ............................................................................................... 18
Step 1: Mount source location .................................................................................. 18
Step 2: Compile Source .......................................................................................... 18
Step 4: Setup OCFS configuration ............................................................................ 21
Step 5: Create OCFS FileSystems (Only to be done on ONE node) ................................ 22
Step 6: Mount OCFS Volumes .................................................................................. 23
Step 7: Reboot ...................................................................................................... 23
External Resource Changes (DNS, Network, RSH, etc…) ................................................. 24
Step 1: Enable R-Tools ........................................................................................... 24
Step 2: Setting up hosts.equiv ................................................................................. 24
ORACLE: INSTALL/CONFIGURE .............................................................................................. 25
RAC ................................................................................................................................ 25
Database Software ......................................................................................................... 25
DBCA: Database Creation ............................................................................................... 25
Warehouse Builder ......................................................................................................... 25
Current RAC Instances ............................................................................................... 26
VIP Information ......................................................................................................... 26
GLOSSARY ......................................................................................................................... 27
Introduction
Allvac has selected Oracle as the primary database solution for the Data Warehouse. In developing the requirements for this solution, a standard was established. This document, in its entirety, is the standard for all future Oracle deployments.
This document details the steps for individual installations and can be applied to all instances within the company. Any changes or updates should be directed to and maintained by the development team.
Following this document will ensure that our Oracle environment remains consistent across all future installations, allowing any administrator to service and maintain the associated resources the same way on every deployment.
SCOPE
This document covers the installation and configuration of the resources associated with Oracle RAC and its implementation within Allvac, including the data warehouse and associated resources.
PURPOSE
The purpose of this document is to provide any IT administrator with the information necessary to obtain, build, and deploy a duplicate of the selected configuration. Included are the hardware specifications, operating system selections and installations, and application configuration.
Future implementations may vary; in that event this document should be updated to reflect any additional changes.
Hardware Selection
MANUFACTURER
The selected hardware manufacturer for Oracle installations within Allvac is Compaq/HP. These
machines have proven to be a reliable and cost effective solution to the requirements set by Allvac.
The default vendor for the equipment has been CDW.
While Compaq/HP is not the only vendor or solution available, to maintain consistency within builds and environments they will be used for ALL nodes in the Oracle RAC configuration. Listed below are the configuration specifications of the machines and their respective functions.
CONFIGURATION
Database backend servers will be the Compaq/HP DL-580 model. This model provides the Oracle database with 4 x 2.7 GHz processors and 12 GB of RAM. These machines have multiple power supplies (for redundancy) along with dual internal 1 Gb NICs (network interface cards). Included in the base configuration are six 64-bit, 100 MHz PCI-X I/O slots (4 hot swappable).
The AS/front-end servers will be Compaq/HP DL-380s. This model provides the application servers with 2 x 2.7 GHz processors, 8 GB of RAM, and 2 x 1 Gb NICs. These models have 3 PCI-X expansion slots (1 at 33 MHz and 2 at 100 MHz) and 2 power supplies, all in a 2U configuration.
Base: Operating System Install/Setup
REDHAT LINUX
RedHat Linux Advanced Server 3.0 (Update 3) has been selected as the default Linux based
operating system within Allvac. RedHat Linux provides an automated build system called
“KickStart” to help facilitate builds of many machines quickly and efficiently.
A build server (ALVMNRCFSR03.ALV.CORP.COM) has been configured to handle the installation
software as well as applications. The KickStart process will automatically mount the appropriate
NFS volume necessary to complete the installation. In the event additional packages are needed
the volume can be mounted manually at: alvmnrcfsr03.alv.corp.com:/export/kickstart. This mount
will provide access to the install CDs shipped with AS (Advanced Server) 3.0.
A configuration file and boot CD have been created, based on the AS 3.0 CDs, and labeled for the Oracle RAC configuration. The CD image and KickStart file are available on the network share used for the installation.
OS Build/Install
The Allvac KickStart requires a few steps to begin the installation. Listed below are the steps to begin and successfully complete the Allvac install of AS 3.0:
Step 1: Insert CD-Rom
Insert the appropriate boot CD into the CD-ROM drive of the host to be installed.
Step 2: Insert Floppy
Insert the appropriate KickStart disk into the floppy drive of the same host. This floppy holds the ks.cfg file that selects the packages, drive configuration, and post-install options for your specific install.
Step 3: Boot from CD-Rom
Boot the host, ensuring that the boot order specifies the CD-Rom drive to be first in
the boot order. This will ensure the host will attempt to boot from the CD as
opposed to an existing configuration on the hard drive.
Step 4: Initiate the Install
When the default boot menu is displayed, either hit `enter` to continue, or type Allvac_Linux (case sensitive) and hit `enter`.
At this time the host will begin the operating system install. The total build time is approximately 10 minutes (depending on network traffic). When the install completes the system will be ready for reboot. Remove the floppy and CD from the host and allow the system to reboot.
Configure OS (Network, Security, Authentication, etc…)
The following steps will be used to configure the new operating system with the options specific to the installation being configured. The installer should have the following pieces of information on hand:

• IP address for all necessary interfaces
• Hostname as registered in Active Directory DNS
• Netmask for the appropriate network
• Gateway for the associated network
Step 1: Network Configuration
Setup IP address and other network configuration for each interface
In the /etc/sysconfig/network-scripts directory modify the ifcfg-eth file for each
interface to be configured. This example will be for eth0 on our test host. The
options specified below may vary in your installation (options go line by line):
DEVICE=eth0
Set the DEVICE to the instance you are configuring
BOOTPROTO=none
This will tell the interface not to use DHCP as an option, but use the specified IP
address.
ONBOOT=yes
Will force the adapter to come online upon boot.
HWADDR=<based on your install>
This option should be left alone as it is specific to the card installed in your machine.
TYPE=Ethernet
Set the media type for the interface. Ethernet is the default.
IPADDR=<Your IP Address>
This is the IP address that was supplied in your registration with the network team.
NETMASK=<Your supplied netmask>
The network team will provide you with the netmask for the supplied IP, in most
cases this is 255.255.255.0
GATEWAY=<Your supplied gateway>
This entry, also provided by the network team, will be the IP to use when packets
need to be routed to another network.
USERCTL=no
Non-Root users cannot control this device.
PEERDNS=no
This interface will not modify /etc/resolv.conf; we want to set DNS manually.
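Putting the options above together, a complete ifcfg-eth0 for a node might look like the following (a sketch: the IP, netmask, gateway, and hardware address shown are placeholders; substitute the values supplied by the network team for your installation):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- example values only
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:00:00:00:00:00
TYPE=Ethernet
IPADDR=159.59.3.40
NETMASK=255.255.255.0
GATEWAY=159.59.3.1
USERCTL=no
PEERDNS=no
```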
Step 2: Hostname Configuration
The next step is to set the hostname. This is completed by modifying the /etc/hosts
file and including the IP address and name of your host in the following format.
By default the /etc/hosts file contains only the loopback (127.0.0.1) address, configured with localhost and localhost.localdomain. We will modify it to include the FQDN (Fully Qualified Domain Name) and IP address.
NOTE: It is a best practice to utilize DNS for name lookup. We will setup only the
hostname in the hosts file. The hostname and all associated aliases will be available
via the AD (ActiveDirectory) DNS infrastructure.
With your editor open the /etc/hosts file for editing. The default configuration looks
similar to this:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1	localhost.localdomain localhost
Open a new line below the localhost entry; our addition will follow the default convention of IP<tab>FQDN<space>HOSTNAME.
In our configuration we are setting up a node in an Oracle RAC cluster. It is important to include all nodes in the /etc/hosts file to ensure the nodes can communicate with each other in the event DNS is down or the public network infrastructure is interrupted. Our /etc/hosts file for node 1 of a two-node cluster looks like this:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1	localhost.localdomain localhost

# Public IP's
159.59.3.40	alvmnrlnx001.alv.corp.com alvmnrlnx001
159.59.3.41	alvmnrlnx002.alv.corp.com alvmnrlnx002

# Private IP's
192.168.0.1	ora-cluster-1
192.168.0.2	ora-cluster-2

# Public VIP Address
159.59.3.27	allvac_ods1.alv.corp.com allvac_ods1
159.59.3.28	allvac_ods2.alv.corp.com allvac_ods2
NOTE the inclusion of the private IPs. These are configured on the eth1 interface. We do not use an FQDN for these interfaces because they are private to the cluster nodes. The public VIP addresses are configured during the Oracle installation; we have included them here for the same reason we included the hostname entry.
We have also chosen to comment the entries to help identify them at a later date.
Step 3: DNS Resolution
The next step is to complete the name resolution configuration used within Allvac.
This is completed by modifying the /etc/resolv.conf file.
Open the /etc/resolv.conf file with your editor and validate the current configuration.
When the system is initially built, it uses DHCP to obtain a network address. When DHCP obtains an IP address it also populates the DNS resolution configuration to ensure the host can reach resources on the network. This default has been saved and can be reused in our production configuration; we will simply validate that the entries are correct.
The default /etc/resolv.conf file created during the install will look like this:
; generated by /sbin/dhclient-script
search alv.corp.com
nameserver 159.59.130.19
nameserver 159.59.2.62
The entries have the following meanings, and can be modified if desired results are
different than specified in this document.
search alv.corp.com
The first entry on this line is the DNS domain this host is part of. The line also specifies the DNS domains you would like the host to check when looking up an address. It can contain multiple entries (space separated, on the same line); the first entry is the default domain.
nameserver 159.59.130.19
The nameserver line specifies the DNS server to query when performing a name lookup. In most cases one is enough, but it is always good to have additional entries in case the first one is not responding. In our case we have specified two servers on two different networks (they hold the same data; one is the backup for the other).
Save and exit the /etc/resolv.conf file to activate the configuration. Because this file is read each time a query is run, changes take effect immediately.
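As a quick scripted sanity check, the nameserver entries can be pulled out of a resolv.conf-style file with awk (a sketch: it reads a sample copy of the file shown above rather than the live /etc/resolv.conf):

```shell
# Write a sample resolv.conf matching the example above, then list its
# nameserver entries. On a live host, point awk at /etc/resolv.conf instead.
cat > /tmp/resolv.conf.sample <<'EOF'
; generated by /sbin/dhclient-script
search alv.corp.com
nameserver 159.59.130.19
nameserver 159.59.2.62
EOF
awk '/^nameserver/ {print $2}' /tmp/resolv.conf.sample
```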
Step 4: Security Configuration/Validation
We are now prepared to validate the system's configuration to ensure necessary services are configured and unnecessary services are disabled. We will cover the specific services of Telnet and SSH.
By default the system installs SSH active and Telnet inactive. This is the way the
system should remain. Later in the Oracle install RSH services are installed,
however Telnet should ALWAYS remain inactive.
Verify SSH is working by typing:
`ssh localhost`
The system should prompt you to add the host key to the known_hosts file, then
prompt for your password. Later in this installation we will cover how this is
automated.
Verify Telnet is disabled by typing:
`telnet localhost`
You should receive the following error message:
[root@alvmnrlnx001 network-scripts]# telnet localhost
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
This error validates that Telnet is disabled.
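If the telnet client itself is not installed on the host, the same check can be made with bash's built-in /dev/tcp pseudo-device (a sketch; this is bash-specific and not portable to other shells):

```shell
# Probe TCP port 23 on localhost without the telnet client. A refused or
# timed-out connection means no telnet service is listening.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/23' 2>/dev/null; then
    echo "port 23 open: telnet may still be enabled"
else
    echo "port 23 closed: telnet is disabled"
fi
```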
Step 5: Reboot
At this time we are ready to reboot the host to allow the previous changes to be
applied. While this can be done by restarting the necessary services, it is best
practice to reboot the machine to complete the configuration.
Setup Resources
The next step in this process is to set up the external resources for this installation. This document identifies the configuration of SAN-based disks that are assigned to the host. These SAN disks will be used as the shared storage in the Oracle RAC.
The process of allocating and assigning the SAN disks is not covered in this document. Refer to the appropriate IT administrator to find the process and/or contact for getting SAN resources assigned to your host.
The SAN administrator will provide you with the necessary information regarding the storage you
requested. This information will include the LUN number(s) and sizes of assigned resources. When
we have this information we are able to proceed.
Step 1: Identify/Install module for SAN HBA
All of our SAN based hosts use Emulex HBA’s. RedHat provides their supported
Emulex HBA driver by default in the AS installation. It is compiled as a module
called lpfc.
When the system boots, the operating system should scan for and automatically activate all cards that it finds. We can validate the installation of the LPFC driver by typing:
`lsmod | grep lpfc`
The expected results will look similar to this:
[root@alvmnrlnx001 network-scripts]# lsmod | grep lpfc
lpfc                  248464   3
scsi_mod              114344   3  [sg cciss lpfc sd_mod]
This result will validate that the LPFC driver has been loaded and is functioning.
If the result of the lsmod command does not display the driver, we should try to load the module to see if it works. This is done with the following command:
Re-Run the lsmod command and verify that the driver has loaded. If it is successful
add the following line to the /etc/modules.conf to ensure the driver will load on boot.
alias scsi_hostadapter lpfc
If you have multiple SCSI cards in the host, your modules.conf may look like this:
alias scsi_hostadapter lpfc
alias scsi_hostadapter1 cciss
If you had to add the entry to /etc/modules.conf, run the following command to ensure the initrd image is aware of the module for booting.
NOTE: This step is ONLY necessary if the LPFC driver did NOT load automatically on boot. In 99% of installations the driver will load automatically, and this step will already have been completed.
WARNING: This process can potentially corrupt the default kernel image, preventing the system from booting! Only proceed if you are sure this is necessary. There is a secondary kernel to boot from on the boot loader menu; use it if your image becomes corrupt, and copy your backup over the image you created (you did create a backup, correct?).
From the /boot directory, copy the existing initrd image to a backup copy:
`cp /boot/initrd-2.4.21-20.ELsmp.img /boot/initrd-2.4.21-20.ELsmp.img.backup`
When this is completed we can then generate a new image with the following
command:
/sbin/mkinitrd -f initrd-2.4.21-20.ELsmp.img `uname -r`
After this is completed the kernel will load the drivers in the /etc/modules.conf (when
the initrd was generated) at boot.
At this time you can reboot the machine to verify the driver loads. If you still cannot see the driver loaded, please see a Unix administrator to further evaluate; DO NOT COMPLETE the process.
Step 2: Add SCSI LUN’s
The next step is to validate that the LUN(s) from the SAN have been added to the system correctly. This is done by querying the kernel to see whether the devices have been discovered. Run the following command to display all SCSI drives on the system:
`cat /proc/scsi/scsi`
[root@alvmnrlnx001 boot]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 10
  Vendor: HITACHI  Model: OPEN-9  Rev: 0117
  Type:   Direct-Access          ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 00 Lun: 11
  Vendor: HITACHI  Model: OPEN-9  Rev: 0117
  Type:   Direct-Access          ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 00 Lun: 12
  Vendor: HITACHI  Model: OPEN-9  Rev: 0117
  Type:   Direct-Access          ANSI SCSI revision: 02
When the request for SAN storage space has been completed, the issuing administrator will provide you with a number for each LUN assigned (make sure to request the decimal number). The LUN field in this query will tell you whether your LUN(s) have been added.
NOTE: In RedHat Linux, unless the LUN(s) begin with 0 they will probably not be
automatically discovered. This will require you to manually add them.
If you do not see the LUN(s) you were expecting, we can issue a command (to be
added to the /etc/rc.local script) that will discover the particular LUN(s) you need.
echo "scsi add-single-device 0 0 0 10" > /proc/scsi/scsi
This command will add LUN 10 to the system. In the `echo "scsi add-single-device R C T L" > /proc/scsi/scsi` syntax, only 2 values will potentially change based on your system configuration: R and L. The R value is the controller number, and the L value is the decimal LUN number provided by the administrator who assigned the SAN resource. In a standard configuration the 1st controller is 0 and the 2nd controller is 1.
This example shows three LUNs added on controller 0 (the first controller):
echo "scsi add-single-device 0 0 0 10" > /proc/scsi/scsi
echo "scsi add-single-device 0 0 0 11" > /proc/scsi/scsi
echo "scsi add-single-device 0 0 0 12" > /proc/scsi/scsi
When you run the command in the beginning of this step again, you should see all
the LUNs displayed.
NOTE: If you have more than one controller configured, you will see double the LUNs
you would normally expect.
Copy the `echo` command lines into the bottom of the /etc/rc.local file to ensure the LUNs are added on every reboot.
The drives are now ready for configuration/format.
Step 3: Creating multi-path device (From ONE node)
In the event you have multiple SAN cards (for redundancy), we can now set up the multipath devices to ensure that if one card or SAN switch goes down, all traffic will be moved to the other controller.
This is important to ensure the stability of the data and access to the SAN resources.
The first part of this is to identify which LUNs are duplicates of each other. After you have run `cat /proc/scsi/scsi`, each entry with a Type of Direct-Access is a SCSI disk. Each of these entries has a /dev/sd entry: the first is /dev/sda, the second is /dev/sdb, and so forth. In this example there are 4 SCSI disks; the 2 we are concerned with are the last two (/dev/sdc and /dev/sdd):
[root@alvmnrlnx003 proc]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM       Model: DPSS-309170N      Rev: S80D
  Type:   Direct-Access          ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: MegaRAID  Model: LD0 RAID5 52500R  Rev: h132
  Type:   Direct-Access          ANSI SCSI revision: 02
Host: scsi2 Channel: 04 Id: 06 Lun: 00
  Vendor: ESG-SHV   Model: SCA HSBP M9       Rev: 0.10
  Type:   Processor              ANSI SCSI revision: 02
Host: scsi2 Channel: 05 Id: 06 Lun: 00
  Vendor: ESG-SHV   Model: SCA HSBP M9       Rev: 0.10
  Type:   Processor              ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: HITACHI   Model: DF600F            Rev: 0000
  Type:   Direct-Access          ANSI SCSI revision: 03
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: HITACHI   Model: DF600F            Rev: 0000
  Type:   Direct-Access          ANSI SCSI revision: 03
Notice these two LUNs have the same LUN number but are on different controllers; these two disks are the same disk. We will use mdadm to create a multipath device to simplify access and provide redundant paths. This is completed by using the mdadm tool to create a new device. Below is the command syntax to perform this function:
`/usr/sbin/mdadm --create /dev/md0 --level multipath --raid-devices 2 /dev/sdc /dev/sdd`
This command will create a device /dev/md0 that we will use to create our file
system. This command would be run, creating a new device for each pair of SAN
LUNs attached to the system.
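Because the create command must be repeated for each pair, it can help to script the generation of the commands (a sketch: the device pairs below are illustrative only and must be replaced with the pairs identified from /proc/scsi/scsi):

```shell
# Emit one mdadm create command per pair of duplicate SAN LUNs.
# Review the output before executing the commands by hand.
i=0
for pair in "/dev/sdc /dev/sdd" "/dev/sde /dev/sdf"; do
    echo "/usr/sbin/mdadm --create /dev/md$i --level multipath --raid-devices 2 $pair"
    i=$((i + 1))
done
```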
After the MD arrays are started, we need to add entries to the /etc/mdadm.conf file to ensure they are started on every reboot of the host. This is done by identifying the UUID of each array we created. To gather the UUID of an array, run the following command:
`/sbin/mdadm -D /dev/<array device>` ex: `/sbin/mdadm -D /dev/md0`
This command will display results similar to the following:
mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu Oct 28 13:55:48 2004
     Raid Level : multipath
     Array Size : 545120192 (519.87 GiB 558.20 GB)
   Raid Devices : 1
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Thu Oct 28 14:01:11 2004
          State : dirty, no-errors
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      spare         /dev/sdd

           UUID : bec54c58:346c9fb5:070e9f7a:8ceecb77
         Events : 0.4
The line starting with UUID is the ID for this particular array. This value will be entered in the /etc/mdadm.conf file to verify and start the correct array. The entry can be added anywhere in the file in this format:
ARRAY /dev/md0 level=multipath num-devices=1 UUID=<UUID>
When the host reboots, it will automatically start the /dev/md0 device. This can be verified by running `/sbin/mdadm -D /dev/md0` and analyzing the results.
If the MD device is not started when the host reboots (common if you had to add the scsi options in /etc/rc.local), an assembly command can be added to the /etc/rc.local file to manually assemble the arrays on reboot:
/sbin/mdadm -A --scan
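Taken together with Step 2, the tail of /etc/rc.local on a host that needs both manual LUN discovery and manual array assembly would look something like this (a sketch; the controller and LUN numbers are the ones used in the examples above):

```shell
# /etc/rc.local additions (sketch): rediscover SAN LUNs, then assemble
# the MD multipath arrays that depend on them.
echo "scsi add-single-device 0 0 0 10" > /proc/scsi/scsi
echo "scsi add-single-device 0 0 0 11" > /proc/scsi/scsi
echo "scsi add-single-device 0 0 0 12" > /proc/scsi/scsi
/sbin/mdadm -A --scan
```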
Oracle: Operating System Resources/Configuration
REDHAT LINUX
Account Creation/Authentication
An Oracle account needs to be created on each node to ensure connectivity and to facilitate the
install and configuration of the Oracle software. The Oracle software install will NOT even start
without this account.
We will be creating the account with the same UID and GID as the existing Oracle accounts on
other Unix machines.
Step 1: Create account
Add the following entry to the /etc/passwd file:
oracle:x:201:200:Oracle User Account:/export/home/oracle:/bin/ksh
Run the `pwconv` process to synchronize the /etc/shadow and /etc/passwd files:
`/sbin/pwconv`
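The fields of the new entry can be double-checked before handing the account over to the DBAs (a sketch: it parses the sample entry above rather than the live /etc/passwd):

```shell
# Split the passwd entry on ':' and label the fields we care about.
entry='oracle:x:201:200:Oracle User Account:/export/home/oracle:/bin/ksh'
echo "$entry" | awk -F: '{print "user="$1, "uid="$3, "gid="$4, "home="$6, "shell="$7}'
```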
Step 2: Create Group
Create the /etc/group entry by pasting the following line into the /etc/group file:
dba:x:200:
Step 3: Create home directory
Next we must create the /export/home/oracle directory and get it ready for the
DBA’s to configure the Oracle user environment:
`mkdir -p /export/home/oracle ; chown -R oracle:dba /export/home/oracle`
OCFS Install/Configure
For RAC to maintain writes to the same datafiles from multiple nodes, OCFS (Oracle Clustered File System) is needed to ensure the integrity of the data being accessed. Without this functionality the files being written would become corrupt the instant multiple nodes performed write operations to the same file.
Other services (NFS, CIFS, etc.) provide this functionality, but become weighed down by CPU overhead or network bottlenecks. Because we will be using SAN-based resources, the clustered filesystem is required.
Detailed below are the steps to correctly install and configure OCFS for the Allvac RAC installation.
Step 1: Mount source location
To begin we need to obtain the source for OCFS. This source is currently located on
alvmnrlnx003 in the /export/kickstart/cluster directory. We will mount this volume
to the /mnt/tmp_mnt directory.
Start by verifying that a /mnt/tmp_mnt directory exists. If it does not, create the
directory:
`mkdir /mnt/tmp_mnt`
We can now mount the source volume to the newly created directory:
`mount alvmnrcfsr03:/export/opt/cluster /mnt/tmp_mnt`
Step 2: Compile Source
To ensure we are getting a clean install of the OCFS tools we will compile the source
to the configuration of the host we are on.
Change to the cluster directory that was mounted in the previous step.
`cd /mnt/tmp_mnt/`
We are using OCFS v1.0.13 for our services at Allvac. Change into the source
directory at this point
`cd ocfs-1.0.13`
We need a clean build tree before compiling; to do this type:
`make clean`
Now we can run the configure script:
`./configure`
loading cache ./config.cache
checking host system type... i686-pc-linux-gnu
checking for gcc... (cached) gcc
checking whether the C compiler (gcc) works... yes
checking whether the C compiler (gcc) is a cross-compiler... no
checking whether we are using GNU C... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking how to run the C preprocessor... (cached) gcc -E
checking for a BSD compatible install... (cached) /usr/bin/install -c
checking whether ln -s works... (cached) yes
checking for ranlib... (cached) ranlib
checking for ar... (cached) /usr/bin/ar
checking for ANSI C header files... (cached) yes
checking for working const... (cached) yes
checking for debugging... no
checking for aio... yes
checking for memory debugging... no
checking for tracing... yes
checking for large ios... auto
checking for directory with kernel source... /lib/modules/2.4.21-20.ELsmp/build
checking for kernel version... 2.4.21-20.ELsmp
checking for Red Hat kernel... yes
checking whether to build aio... yes
checking for NPTL support... yes
checking for kernel module symbol versions... yes
checking for directory for kernel modules... /lib/modules/2.4.21-20.ELsmp/kernel/fs
checking for gcc include path... /usr/lib/gcc-lib/i386-redhat-linux/3.2.3/include
creating ./config.status
creating Config.make
creating vendor/redhat/ocfs-2.4.9-e.spec
creating vendor/redhat/ocfs-2.4.18-e.spec
creating vendor/redhat/ocfs-2.4.21-EL.spec
creating vendor/unitedlinux/ocfs-2.4.19-64GB-SMP.spec
creating vendor/unitedlinux/ocfs-2.4.19-4GB-SMP.spec
creating vendor/unitedlinux/ocfs-2.4.19-4GB.spec
creating vendor/unitedlinux/ocfs-2.4.21.spec-generic
When the configure process is complete (it should exit with no errors), we can
compile and install the software:
`make ; make install`
To validate the install process, we can do a directory listing of the module directory
and see if the OCFS module has been compiled.
`ls -al /lib/modules/2.4.21-20.ELsmp/kernel/fs/ocfs.o`
-rw-r--r--    1 root     root     430863 Nov 18 15:07 /lib/modules/2.4.21-20.ELsmp/kernel/fs/ocfs.o
Your results should be similar. The presence of the ocfs.o file confirms the module
was compiled and installed correctly. Your directory may vary depending on the
kernel version you are using; in our case, 2.4.21-20.ELsmp is the running kernel.
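Since the module path depends on the running kernel, a small sketch (our own, not part of the OCFS tools) can derive the expected path from `uname -r` so the check works on any kernel version:

```shell
# Derive the expected ocfs.o location from the running kernel and test it.
KVER=$(uname -r)
MOD="/lib/modules/$KVER/kernel/fs/ocfs.o"

if [ -f "$MOD" ]; then
    echo "OCFS module present: $MOD"
else
    echo "OCFS module NOT found: $MOD"
fi
```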
Step 3: Compile OCFS-Tools
Next we need to build the OCFS-Tools necessary to configure the resource. This is
completed in much the same way as the module (in fact, most source compiles follow
the same pattern).
Change to the ocfs-tools directory
`cd /mnt/tmp_mnt/ocfs-tools-1.1.3`
Run the process identified earlier:
`make clean`
`./configure`
`make ; make install`
To verify the install completed correctly:
`ls -al /usr/local/sbin/ocfs_uid_gen`
-rwxr-xr-x    1 root root 9737 Nov 18 15:18 /usr/local/sbin/ocfs_uid_gen
This result confirms the OCFS-Tools have been installed.
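Because the module and the tools use the identical clean/configure/build/install cycle, the two builds can be sketched as one loop. `RUN=echo` is our own dry-run convention that prints the commands; clear it on the node to execute them for real.

```shell
# One build cycle per source tree (directories from this document).
RUN=echo
for dir in /mnt/tmp_mnt/ocfs-1.0.13 /mnt/tmp_mnt/ocfs-tools-1.1.3; do
    $RUN cd "$dir"
    $RUN make clean      # clean build tree
    $RUN ./configure     # generate build configuration for this host
    $RUN make
    $RUN make install
done
```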
Step 4: Setup OCFS configuration
We are now at the point where we can configure the newly installed OCFS system.
This process is straightforward; several configuration files have already been created
and can be copied into your install.
Copy the /mnt/tmp_mnt/ocfs.conf file to /etc/ocfs.conf
`cp /mnt/tmp_mnt/ocfs.conf /etc/ocfs.conf`
Edit this file and change the values marked with the ‘< >’.
In our example, change:
ip_address=<PRIVATE IP OF THIS NODE>
ip_port=9999
node_name=<PRIVATE HOSTNAME OF THIS NODE>
To:
ip_address=192.168.0.1
ip_port=9999
node_name=ora-cluster-1
This information is available from the /etc/hosts file for the node you are working on.
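The edit above can be sketched as a small helper that writes the three required values. The `write_ocfs_conf` name is ours, not part of the OCFS tools, and the IP/hostname arguments should come from the node's private /etc/hosts entries.

```shell
# Generate an ocfs.conf fragment from a private IP and hostname.
write_ocfs_conf() {
    # $1 = private IP, $2 = private hostname, $3 = output file
    printf 'ip_address=%s\nip_port=9999\nnode_name=%s\n' "$1" "$2" > "$3"
}

# Example for node 1 (written to /tmp for review, not straight to /etc).
write_ocfs_conf 192.168.0.1 ora-cluster-1 /tmp/ocfs.conf.example
```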
Next we need to generate the UID for this OCFS node. This step is important: the
UID identifies each node whenever it mounts, connects to, or otherwise uses an
OCFS volume.
Run:
`/usr/local/sbin/ocfs_uid_gen -c`
To validate this, cat the /etc/ocfs.conf file. There should now be a guid entry at the
bottom of the file:
cat /etc/ocfs.conf
ip_address=192.168.0.2
ip_port=9999
node_name=ora-cluster-2
guid = E3230E1382241995C107001185639706
With this completed we are now ready to start the service. This process will copy a
startup script to the /etc/init.d directory and activate this script for execution during
bootup.
From the /mnt/tmp_mnt directory, copy the `ocfs` file to the /etc/init.d directory.
`cp /mnt/tmp_mnt/ocfs /etc/init.d/ocfs`
Next run `chkconfig` to enable the service at bootup:
`/sbin/chkconfig --level 345 ocfs on`
This will enable OCFS on runlevels 3, 4, and 5.
Now we can start the OCFS module, and continue to format our OCFS volumes.
`/etc/init.d/ocfs start`
You should see the [OK] message return to validate.
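Beyond the [OK] message, a quick sketch (our own addition, not from the OCFS tools) can confirm the ocfs kernel module actually loaded; /proc/modules lists every module in the running kernel:

```shell
# Check whether the ocfs module is loaded in the running kernel.
if grep -q '^ocfs ' /proc/modules 2>/dev/null; then
    echo "ocfs module loaded"
else
    echo "ocfs module not loaded -- check /etc/init.d/ocfs output"
fi
```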
Step 5: Create OCFS FileSystems (Only to be done on ONE node)
The final step to complete the OCFS installation is to format the disks we intend to
use as the OCFS volumes with an OCFS format.
This is completed by running the following command:
`mkfs.ocfs -b 1024 -C -F -g <GID of the DBA group> -u <UID of the Oracle user> -L <label for the volume> -m <mount path> /dev/md<device to format>`
Our example is:
`mkfs.ocfs -b 1024 -C -F -g 200 -u 201 -L /u01 -m /u01 /dev/md0`
Formatting the LUNs does take some time. The results will look similar to this:
Cleared volume header sectors
Cleared node config sectors
Cleared publish sectors
Cleared vote sectors
Cleared bitmap sectors
Cleared data blocks
Wrote root directory and system files
Updated global bitmap
Wrote volume header
Repeat this step for each OCFS volume to be mounted.
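The repetition can be sketched as a loop over the label/device pairs used in this installation. The leading `echo` makes this a dry run that only prints each command; remove it (on ONE node only) to actually format the LUNs.

```shell
# One mkfs.ocfs invocation per volume; GID 200 and UID 201 are the values
# from this document's example.
for pair in /u01:/dev/md0 /u02:/dev/md1 /u03:/dev/md2; do
    label=${pair%%:*}   # e.g. /u01
    dev=${pair##*:}     # e.g. /dev/md0
    echo mkfs.ocfs -b 1024 -C -F -g 200 -u 201 -L "$label" -m "$label" "$dev"
done
```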
Step 6: Mount OCFS Volumes
OCFS volumes are mounted the same way regular file systems are. We will specify
an option to mount manually, and make a change to the /etc/fstab to prepare the
mount for reboots.
The first step to mounting these volumes is to determine where we want them
mounted. Based on the configuration and installation requirements we have chosen
the following mounts for our three volumes:
/u01 = /dev/md0
/u02 = /dev/md1
/u03 = /dev/md2
With these values known, we can begin to mount the volumes.
Our first step will be to add the entries to the /etc/fstab file. To begin, open
/etc/fstab for editing and create a new entry formatted like this:
/dev/md0    /u01    ocfs    _netdev    1 2
Repeat this entry for each volume you require to be mounted as OCFS.
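Generating all three entries at once can be sketched as a loop that writes a staging file (our own approach; review the fragment, then merge it into /etc/fstab as root):

```shell
# Build an fstab fragment for the three OCFS volumes in this document.
FRAG=/tmp/fstab.ocfs
: > "$FRAG"   # truncate the staging file
for pair in /dev/md0:/u01 /dev/md1:/u02 /dev/md2:/u03; do
    printf '%s\t%s\tocfs\t_netdev\t1 2\n' "${pair%%:*}" "${pair#*:}" >> "$FRAG"
done
```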
To ensure these volumes are mounted at boot, add the following entry to the end
of the /etc/rc.local script:
/sbin/mount -a -t ocfs
Step 7: Reboot
Reboot the node to confirm that the OCFS service starts and the volumes mount
automatically at boot.
External Resource Changes (DNS, Network, RSH, etc…)
In order to complete the RAC and DB install, additional services are needed to perform the host-to-host
data transfer. Ideally this transfer would be handled over an SSH connection; however, that
is not possible with the current release of Oracle, so the R-Tools are still needed to
complete the transfer.
Part of the default system build is the RPM for the RSH-Server. This package includes the tools
rsh, rlogin, rexec, and rcp.
In order for the Oracle account to utilize these tools without a password a system configuration
change is needed. Below are the steps to complete the process.
Step 1: Enable R-Tools
R-Tools will need to be enabled within the xinetd configuration. This process is
completed with the following commands:
`/sbin/chkconfig rsh on ; chkconfig rlogin on ; chkconfig rexec on`
Step 2: Setting up hosts.equiv
In order for users to use the R-Tools without being prompted for a password, we
need to set up an access configuration specifying which hosts and users may log in.
Until this is done, users will be prompted for a password whenever the tools are
used to connect to remote hosts.
We are concerned with configuring the Oracle user. At this time, only the Oracle
user should be able to use the R-Tools without being prompted for a password, and it
should only be to the necessary RAC nodes. This process is completed by making the
following entries in the /etc/hosts.equiv file:
+alvmnrlnx001 oracle
+alvmnrlnx002 oracle
+ora-cluster-1 oracle
+ora-cluster-2 oracle
These entries need to be added to ALL nodes that the Oracle user will need to
connect to without a password. In this example we specified two nodes (and their
associated private node name) that the oracle user can connect to without a
password using the R-Tools.
This can be tested by switching to the oracle user and typing:
`rsh <node name in /etc/hosts.equiv>`
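The test can be repeated for every node in one sketch; run it as the oracle user. `RUN=echo` keeps this a dry run (our own convention); clear `RUN` on the cluster to perform the real connections. Passing `echo ok` to rsh runs one command and returns instead of opening an interactive shell.

```shell
# Exercise passwordless rsh against every node from the hosts.equiv example.
RUN=echo
for node in alvmnrlnx001 alvmnrlnx002 ora-cluster-1 ora-cluster-2; do
    $RUN rsh "$node" echo ok
done
```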
Oracle: Install/Configure
RAC
DATABASE SOFTWARE
DBCA: DATABASE CREATION
WAREHOUSE BUILDER
Appendix A. Configuration Information
Current RAC Instances
VIP Information
Glossary
This section lists the acronyms used in this document and their respective meanings.
Acronym    Definition