
MC/ServiceGuard Implementation
USIT

Author: Arild Landsem, HP Support Norge
Engineering Services
Contents:
1. What is MC/ServiceGuard?
   Failover
   MC/ServiceGuard architecture
   Cluster manager
   Cluster configuration
   Manual startup
   Automatic startup
   Dynamic cluster reformation
   Cluster Quorum
   Network Manager
2. MC/ServiceGuard at USIT
   Brief description of the solution
   Description of the equipment
   Redundancy
   Hardware configuration
   Network configuration
   Disk configuration
   Physical disk configuration
3. Cluster status
   Troubleshooting
   Logfiles
   Package log
   Cluster verification
4. File lists
   Cluster configuration
   Package configuration
   Package control script
5. MC/ServiceGuard Commands
Appendix A  Configuration of the FC60 disk array
Appendix B  Shared volume groups
Appendix C  NFS filesystems for the MC/ServiceGuard packages kant and hume
Appendix D  Configuration files
1. What is MC/ServiceGuard?
MC/ServiceGuard allows you to create high availability clusters of HP9000 Series 800 computers. A high
availability computer system allows application services to continue in spite of a hardware or software
failure. Highly available systems protect users from software failures as well as from failure of a system
processing unit (SPU), disk, or local area network (LAN) component. In the event that one component fails,
the redundant component takes over. MC/ServiceGuard and other high availability subsystems coordinate the
transfer between components.
An MC/ServiceGuard cluster is a networked grouping of HP 9000 series 800 servers (host systems known as
nodes) having sufficient redundancy of software and hardware that a single point of failure will not
significantly disrupt service. Application services (individual HP-UX processes) are grouped together in
packages; in the event of a single service, node, network, or other resource failure, MC/ServiceGuard can
automatically transfer control of the package to another node within the cluster, allowing services to remain
available with minimal interruption.
Figure 1-1 shows a typical MC/ServiceGuard cluster with two nodes.
[Figure 1-1: A two-node MC/ServiceGuard cluster. Node A runs package A and node B runs package B; each node has its own root disks, both nodes are cabled to the mirrored package disks, and the nodes are connected by redundant Ethernet links through a hub.]
In the figure, node1 (one of two SPUs) is running package A, and node2 is running package B. Each package
has a separate group of disks associated with it, containing data needed by the package’s applications, and a
mirror copy of the data. Note that both nodes are physically connected to both groups of mirrored disks.
However, only one node at a time may access the data for a given group of disks. In the figure, node1 is
shown with exclusive access to the top two disks (solid line), and node2 is shown as connected without access
to the top disks (dotted line). Similarly, node 2 is shown with exclusive access to the bottom two disks (solid
line), and node 1 is shown as connected without access to the bottom disks (dotted line). Mirror copies of
data provide redundancy in case of disk failures. In addition, a total of four data buses are shown for the disks
that are connected to node 1 and node 2. This configuration provides the maximum redundancy and also
gives optimal I/O performance, since each package is using different buses.
Note that the network hardware is cabled to provide redundant LAN interfaces on each node.
MC/ServiceGuard uses TCP/IP network services for reliable communication among nodes in the cluster,
including the transmission of heartbeat messages, signals from each functioning node, which are central to
the operation of the cluster.
Failover:
Under normal conditions, a fully operating MC/ServiceGuard cluster simply monitors the health of the
cluster's components while the packages are running on individual nodes. Any host system running in the
MC/ServiceGuard cluster is called an active node. When you create the package, you specify a primary node
and one or more adoptive nodes. When a node or its network communications fails, MC/ServiceGuard can
transfer control of the package to the next available adoptive node.
This situation is shown in Figure 1-2.
[Figure 1-2: After failover, node B runs both package A and package B; the disk and network connections are unchanged.]
After this transfer, the package remains on the adoptive node as long as the adoptive node continues running, even if the primary node comes back online. In situations where the adoptive node continues running successfully, you may manually transfer control of the package back to the primary node at the appropriate time. In certain circumstances, in the event of an adoptive node failure, a package that is running on an adoptive node will switch back automatically to its primary node (assuming the primary node is running again as a cluster member).
MC/ServiceGuard architecture
[Figure 2-1: MC/ServiceGuard software components. Packages (applications, services and resources) run on top of the MC/ServiceGuard components (Package Manager, Cluster Manager and Network Manager), which in turn run on the HP-UX operating system kernel (with LVM).]
Cluster manager
The cluster manager is used to initialize a cluster, to monitor the health of the cluster, to recognize node
failure if it should occur, and to regulate the re-formation of the cluster when a node joins or leaves the
cluster. The cluster manager operates as a daemon process that runs on each node. During cluster startup
and re-formation activities, one node is selected to act as the cluster coordinator. Although all nodes perform
some cluster management functions, the cluster coordinator is the central point for inter-node
communication.
Cluster configuration
The system administrator sets up cluster configuration parameters and does an initial cluster startup;
thereafter, the cluster regulates itself without manual intervention in normal operation. Configuration
parameters for the cluster include the cluster name and nodes, networking parameters for the cluster
heartbeat, cluster lock disk information, and timing parameters (discussed in detail in the
“Planning” chapter). Cluster parameters are entered using SAM or by editing an ASCII cluster configuration
template file. The parameters you enter are used to build a binary configuration file, which is propagated to
all nodes in the cluster. This binary cluster configuration file must be the same on all the nodes in the cluster.
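As an illustration of this workflow (a minimal sketch; the configuration file name is the one used later in this document and the node names are this cluster's):

# Generate an ASCII configuration template for the candidate nodes
cmquerycl -v -C /etc/cmcluster/cmclconf.ascii -n fkant -n fhume

# After editing the template: validate it, then build and distribute the binary file
cmcheckconf -C /etc/cmcluster/cmclconf.ascii
cmapplyconf -C /etc/cmcluster/cmclconf.ascii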
Manual startup
A manual startup forms a cluster out of all the nodes in the cluster configuration. Manual startup is normally
done the first time you bring up the cluster, after cluster-wide maintenance or upgrade, or after
reconfiguration.
Before startup, the same binary cluster configuration file must exist on all nodes in the cluster. The system
administrator starts the cluster in SAM or with the cmruncl command issued from one node. The cmruncl
command can only be used when the cluster is not running, that is, when none of the nodes is running the
cmcld daemon.
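For example (a sketch using this cluster's node names):

# Start the cluster on all configured nodes
cmruncl -v

# ...or only on an explicit list of nodes
cmruncl -v -n fkant -n fhume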
Automatic startup
An automatic cluster restart occurs when all nodes in a cluster have failed. This is usually the situation when
there has been an extended power failure and all SPUs went down. In order for an automatic cluster restart to
take place, all nodes specified in the cluster configuration file must be up and running, must be trying to form
a cluster, and must be able to communicate with one another. Automatic cluster restart will take place if the
flag AUTOSTART_CMCLD is set to 1 in the /etc/rc.config.d/cmcluster file.
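The relevant entry looks like this (a sketch of the file's contents; other variables in the file are omitted):

# /etc/rc.config.d/cmcluster
# Join the cluster automatically at boot time
AUTOSTART_CMCLD=1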
Dynamic cluster reformation
A dynamic re-formation is a temporary change in cluster membership that takes place as nodes join or leave a
running cluster. Re-formation differs from reconfiguration, which is a permanent modification of the
configuration files. Re-formation of the cluster occurs under the following conditions:
• An SPU or network failure was detected on an active node.
• An inactive node wants to join the cluster. The cluster manager daemon has been started on that node.
• A node has been added to or deleted from the cluster configuration.
• The system administrator halted a node.
• A node halts because of a package failure.
• A node halts because of a service failure.
• Heavy network traffic prohibited the heartbeat signal from being received by the cluster.
• The heartbeat network failed, and another network is not configured to carry heartbeat.
Typically, re-formation results in a cluster with a different composition. The new cluster may contain fewer or
more nodes than in the previous incarnation of the cluster.
Cluster Quorum
The algorithm for cluster re-formation generally requires a cluster quorum of a strict majority (that is, more
than 50%) of the nodes previously running. However, exactly 50% of the previously running nodes may re-form as a new cluster, provided there is a guarantee that the other 50% of the previously running nodes do
not also re-form. In these cases, a tiebreaker is needed. For example, if there is a communication failure
between the nodes in a two-node cluster, and each node is attempting to re-form the cluster, then
MC/ServiceGuard only allows one node to form the new cluster. Using a cluster lock ensures this.
Package Manager
Each node in the cluster runs an instance of the package manager; the package manager residing on the
cluster coordinator is known as the package coordinator.
The package coordinator does the following:
• Decides when and where to run, halt or move packages.
The package manager on all nodes does the following:
• Executes the user-defined control script to run and halt packages and package services.
• Reacts to changes in the status of monitored resources.
Deciding When and Where to Run and Halt Packages
Each package is separately configured by means of a package configuration file, which can be edited
manually or through SAM. This file assigns a name to the package and identifies the nodes on which the
package can run, in order of priority. It also indicates whether or not switching is enabled for the package,
that is, whether the package should switch to another node or not in the case of a failure. There may be many
applications in a package. Package configuration is described in detail in the chapter “Configuring Packages
and their Services.”
Starting the Package and Running Application Services
After a cluster has formed, the package manager on each node starts up packages on that node. Starting a
package means running individual application services on the node where the package is running.
To start a package, the package manager runs the package control script with the 'start' parameter. This script performs the following tasks:
• uses Logical Volume Manager (LVM) commands to activate volume groups needed by the package.
• mounts filesystems from the activated volume groups to the local node.
• uses cmmodnet to add the package's IP address to the current network interface running on a configured subnet. This allows clients to connect to the same address regardless of the node the service is running on.
• uses the cmrunserv command to start up each application service configured in the package. This command also initiates monitoring of the service.
• executes a set of customer-defined run commands to do additional processing, as required.
While the package is running, services are continuously monitored. If any part of a package fails, the package halt instructions are executed as part of a recovery process. Failure may result in simple loss of the service, a restart of the service, transfer of the package to an adoptive node, or transfer of all packages to adoptive nodes, depending on the package configuration. In package transfers, MC/ServiceGuard sends a TCP/IP packet across the heartbeat subnet to the package's adoptive node telling it to start up the package.
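A minimal sketch of this 'start' sequence (the volume group, mount point, IP address, subnet and service names here are illustrative only, not this installation's actual values):

# Activate the volume group exclusively on this node and mount its filesystem
vgchange -a e /dev/vg01
mount /dev/vg01/lvol1 /mnt/pkgdata

# Add the package's relocatable IP address on the configured subnet
cmmodnet -a -i 192.168.1.10 192.168.1.0

# Start the application service and let ServiceGuard monitor it
cmrunserv service1 "/opt/app/bin/appserver"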
NOTE
When applications run as services in an MC/ServiceGuard package, you do not start them directly;
instead, the package manager runs packages on your behalf either when the cluster starts or when a
package is enabled on a specific node. Similarly, you do not halt an individual application or service
directly once it becomes part of a package. Instead you halt the package or the node.
Stopping the Package
The package manager is notified when a command is issued to shut down a package. In this case, the
package control script is run with the ’stop’ parameter. For example, if the system administrator chooses “Halt
Package” from the “Package Administration” menu in SAM, the package manager will stop the package.
Similarly, when a command is issued to halt a cluster node, the package manager will shut down all the
packages running on the node, executing each package control script with the 'stop' parameter. When run
with the 'stop' parameter, the control script:
• uses cmhaltserv to halt each service.
• unmounts filesystems that had been mounted by the package.
• uses Logical Volume Manager (LVM) commands to deactivate volume groups used by the package.
• uses cmmodnet to delete the package's IP address from the current network interface.
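The corresponding 'stop' sequence, again as an illustrative sketch with the same hypothetical names as above:

# Halt the monitored service
cmhaltserv service1

# Unmount the filesystem and deactivate the volume group
umount /mnt/pkgdata
vgchange -a n /dev/vg01

# Remove the relocatable IP address from the subnet
cmmodnet -r -i 192.168.1.10 192.168.1.0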
Network Manager
The purpose of the network manager is to detect and recover from network card and cable failures so that
network services remain highly available to clients. In practice, this means assigning IP addresses for
each package to the primary LAN interface card on the node where the package is running and monitoring
the health of all interfaces, switching them when necessary.
2. MC/ServiceGuard at USIT
Brief description of the solution
USIT has installed two HP 9000 A-class machines that will serve as multiprotocol file servers (NFS and CIFS) for the university's students. To achieve a higher degree of availability for the file server service, the product MC/ServiceGuard has been adopted. The machines fkant and fhume, which normally run one service each (the services kant and hume, respectively), will mutually monitor each other and take over the service if the other machine fails.
Description of the equipment
The MC/ServiceGuard cluster at USIT consists of two HP 9000 A-class servers and one HP Model FC60 disk array. The servers are connected to the disk array via FibreChannel controllers and FibreChannel hubs. All units are mounted in a 2m rack.

Servers: 2 x HP 9000 A500 Enterprise Server, both configured with:
Hardware
2 x 440MHz PA8500 CPU
2GB RAM
2 x 18GB HotPlug Ultra SCSI LP disk
1 x Dual FibreChannel controller (PCI)
1 x Gigabit Ethernet (PCI)
1 x 10/100Mbit Ethernet (PCI)
1 x 10/100Mbit internal
1 x DVD-ROM
Software
HP-UX 11.0 64-bit
MirrorDisk/UX
OV Glance+Pak
OnLine JFS
MC/ServiceGuard
MC/ServiceGuard NFS Toolkit

Storage: 1 x SureStore E Disk Array FC 60, configured with:
Dual Controller with 256MB Cache
6 x SureStore E Disk System with duplicated BCC
31 x 36GB 10K RPM LVD Disk Drives

Storage connectivity: 2 x 10-port Short Wave Fibre Channel HUB
Physical placement of the equipment
[Figure: Rack layout showing the racks studserver-fc1 and studserver-fc2 with the servers fkant and fhume, their DVD drives, the SmartStore unit and the FC60 disk array.]
Hardware configuration
To achieve maximum uptime in an MC/ServiceGuard cluster, redundancy and fault tolerance must be built in at every level of the solution. The machines must be configured with the right hardware to achieve this, and several products in addition to MC/ServiceGuard must be used.
Fault tolerance through redundancy is handled at different levels and by different products, as shown in the table:
Component                    Redundancy
Power Supply, Server         Extra Power Supply (N + 1)
Power Supply, FC60           Extra Power Supply (N + 1)
Fibre Channel controller     Alternate Path (OS functionality)
Boot disk                    Mirrored disk (MirrorDisk/UX)
Disk mechanism, FC60         RAID-5
Fibre Channel HUB            Alternate Path (OS functionality)
CPU, memory, backplane       The service is moved to the other machine (MC/ServiceGuard)
Network controller           The IP configuration is moved to the standby network controller (MC/ServiceGuard)

Table 2-1 Redundancy
The servers are connected to the shared disks through duplicated FibreChannel controllers. The network cables for the redundant networks are connected to a shared switch. The figure shows how the machines are cabled so that no single storage or network component can cause downtime.
[Fig 2.2 Hardware setup: fkant and fhume (A500) each have LAN 0 on the 10/100Mb internal LAN, LAN 1 on the 1Gbit data LAN and LAN 2 on the 100Mbit standby LAN, and two FibreChannel controllers (c9, c10) connected via two Fibre Channel Arbitrated Loop Hubs (A and B) to the FC60 disk array, which is configured as 5 LUNs of 180GB each.]
Network configuration
Each of the machines in the cluster has 3 network controllers: a 1Gb controller for data/production, a 100Mb controller that acts as standby for data/production, and the built-in 10/100Mb controller, which is used as a dedicated heartbeat. In addition, MC/SG runs heartbeat on the 1Gb network. The configuration is illustrated in the following figure:
[Fig. 2-3 Network configuration: on each server (fkant, fhume), LAN 0 (hardware path 0/0/0/0) is on the 10/100Mb internal LAN 10.0.0.x and carries heartbeat, LAN 1 (0/2/0/0) is on the 1Gbit data LAN 129.240.130.x and also carries heartbeat, and LAN 2 (0/6/0/0) is on the 100Mbit standby LAN. The IP addresses are listed in Table 2-2.]
Machine   Interface   IP address        Standby for   Heartbeat
fkant     lan0        10.0.0.21                       yes
fkant     lan1        129.240.130.21                  yes
fkant     lan2                          lan1
fhume     lan0        10.0.0.23                       yes
fhume     lan1        129.240.130.23                  yes
fhume     lan2                          lan1

Table 2-2 Network interfaces
Disk configuration
Both machines in the cluster have access to the same disks through identical configuration of the volume groups on the two systems. All shared disk resources must be configured with LVM. MC/ServiceGuard uses the "Cluster LVM" functionality to give the machines exclusive access to the shared volume groups. The shared disk resource in the cluster is one FC60, accessed via FC hubs. The physical cabling, from duplicated FC controllers on each machine, via duplicated hubs, to duplicated controllers on the array, together with the "Alternate Path" functionality in LVM, gives a fault-tolerant connection to storage. The disk array has built-in fault tolerance in that all of its components are redundant.
Physical disk configuration
The array is configured with 31 physical disks, distributed as 5 disks in each disk enclosure plus one global hot spare disk placed in enclosure 1. The physical disks are further combined into RAID groups, or "LUNs". The configured LUNs are given target and LUN addresses that are presented on the two FibreChannel ports.
In the initial configuration, 5 LUNs have been created, each consisting of 6 x 36GB disks and configured with RAID-5. This gives 5 x 180GB LUNs, which the machines see as 5 x 180GB physical disks. Detailed and up-to-date documentation is provided in Appendix A.
The machines have access to the same disks (FC60 LUNs) via FC. The following figure shows the physical cabling:
[Fig 2-4 Physical connection of the disk array: fkant and fhume (A500) each connect through FC controllers c9 (0/4/0/0) and c10 (0/6/2/0), via two Fibre Channel Arbitrated Loop Hubs, to controllers A and B on the FC60 disk array, which is configured as 5 LUNs of 180GB each.]

LVM configuration
The initial LVM configuration of the shared volume groups was done by creating one volume group per disk (180GB FC60 LUN). The volume groups were created on fkant and then imported on fhume. The volume groups have the same names and minor numbers on both nodes, as required by MC/SG. Detailed and up-to-date documentation is provided in Appendix B.
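A sketch of the usual procedure for creating such a volume group on fkant and importing it on fhume with a matching minor number (the device files and minor number shown here are taken from the vgkant01 entry in Appendix B; the map file name is just an example):

# On fkant: create the volume group on the FC60 LUN, with both paths
mkdir /dev/vgkant01
mknod /dev/vgkant01/group c 64 0x050000
pvcreate /dev/rdsk/c9t0d0
vgcreate /dev/vgkant01 /dev/dsk/c9t0d0 /dev/dsk/c10t0d0

# Write a map file (preview mode, the VG stays configured) and copy it to fhume
vgexport -p -s -m /tmp/vgkant01.map /dev/vgkant01
rcp /tmp/vgkant01.map fhume:/tmp/vgkant01.map

# On fhume: create the group file with the same minor number and import the VG
mkdir /dev/vgkant01
mknod /dev/vgkant01/group c 64 0x050000
vgimport -s -m /tmp/vgkant01.map /dev/vgkant01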
Cluster configuration
The basic cluster configuration is defined in the cluster's configuration file, /etc/cmcluster/cmclconf.ascii.
As good practice it is recommended that all configuration files are maintained on the machine fkant and then distributed to fhume. A printout of cmclconf.ascii is included in Appendix D, but for guaranteed up-to-date information, check the file on fkant. If there is any doubt about whether the ASCII file matches the running cluster, the file can always be regenerated from the binary configuration with the command cmgetconf(1M).
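For example (a sketch using the cluster and file names from this installation):

# Regenerate the cluster ASCII file from the running binary configuration
cmgetconf -c cluster1 /etc/cmcluster/cmclconf.ascii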
Initial cluster configuration:

Cluster name                      cluster1
Node                              fkant
Node                              fhume
Cluster Lock Volume Group         vgkant01
Cluster Lock Disk fkant           /dev/dsk/c9t0d0
Cluster Lock Disk fhume           /dev/dsk/c9t0d0
Heartbeat Interval                1 sec
Heartbeat Timeout                 8 sec
Autostart Timeout                 10 min
Network Polling Interval          2 sec
Networks, heartbeat and standby   see Table 2-2
Shared volume groups              see table, Appendix B
Package configuration
Overall functionality
The main function of the packages configured on the cluster is to deliver file server services based on NFS and CIFS. The packages are configured on the basis of HP's scripts for Highly Available NFS and Highly Available CIFS.
The scripts were made by extracting the NFS-specific functions from the NFS script and inserting them into the CIFS script. Customer-specific information was then added.
USIT could not use HP's suggested NFS cross-mount scheme, which relies on both local and remote NFS mounting of the filesystems. This would not work, because USIT wants to be able to export all of the cluster's filesystems from one and the same (physical) machine. This has been solved by using symbolic links. The actual access point is now a symbolic link, which points to local or remote volumes depending on where the package is located. The package script controls the link: it is created and removed together with the start and stop of the package. A detailed overview of the links is given in Appendix C.
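As an illustration of the idea only (this is not the actual code in the package control scripts, which are reproduced in Appendix D; the link and target names are taken from Appendix C), the customer-defined commands could flip the access point for the kant package roughly like this:

# On the node where the package runs: point the access path at the local filesystem
rm -f /uio/kant/u1
ln -s /uio/kant/fu1 /uio/kant/u1

# On the other node: point the access path at the NFS mount of the same filesystem
rm -f /uio/kant/u1
ln -s /uio/kant/nfsu1 /uio/kant/u1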
The package kant
The package kant is defined by the files /etc/cmcluster/kant/kant.conf and /etc/cmcluster/kant/kant.cntl.
Printouts of the scripts are attached in Appendix D.
The package kant has the following properties:

Package name                      kant
Primary node                      fkant
Secondary node                    fhume
Service                           - no service defined
Critical subnet for the package   129.240.130.0
FAILOVER_POLICY                   CONFIGURED_NODE
FAILBACK_POLICY                   MANUAL
PKG_SWITCHING_ENABLED             YES
NET_SWITCHING_ENABLED             YES
NODE_FAILFAST_ENABLED             NO
EMS RESOURCES                     - none defined
Relocatable IP address            129.240.130.24
Volume groups                     vgkant01, vgkant02, vgkant03
Filesystem                        logical volume /dev/vgkant01/usit1, mount point /uio/kant/fu1
The package hume
The package hume is defined by the files /etc/cmcluster/hume/hume.conf and /etc/cmcluster/hume/hume.cntl.
Printouts of the scripts are attached in Appendix D.
The package hume has the following properties:

Package name                      hume
Primary node                      fhume
Secondary node                    fkant
Service                           - no service defined
Critical subnet for the package   129.240.130.0
FAILOVER_POLICY                   CONFIGURED_NODE
FAILBACK_POLICY                   MANUAL
PKG_SWITCHING_ENABLED             YES
NET_SWITCHING_ENABLED             YES
NODE_FAILFAST_ENABLED             NO
EMS RESOURCES                     - none defined
Relocatable IP address            129.240.130.24
Volume groups                     vghume01, vghume02
Filesystem                        logical volume /dev/vghume01/lvol1, mount point /uio/hume/fu1
3. Cluster status
Status of the cluster can be obtained with SAM or with the cmviewcl command. This gives a short status of
nodes, packages and networks.
CLUSTER      STATUS
cluster1     up

  NODE         STATUS       STATE
  primary      up           running

    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE
    pkg1         up           running      enabled      primary

  NODE         STATUS       STATE
  primary      up           running

    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE
    appl         up           running      disabled     primary

Notes on the output:
• The cluster cluster1 is up and running.
• Node 1 is up and running.
• Package pkg1 is running on node 1; if node 1 or pkg1 goes down, the package will be moved to the adoptive node (PKG_SWITCH enabled).
• Node 2 is up and running.
• Package appl is up and running on node 2; if node 2 or appl goes down, the package will NOT be moved (PKG_SWITCH disabled).
More detailed status can be obtained with SAM or with cmviewcl -v:
CLUSTER      STATUS
cluster1     up

  NODE         STATUS       STATE
  primary      up           running

  Network_Parameters:
    INTERFACE    STATUS       PATH         NAME
    PRIMARY      up           10/4/8       lan1
    STANDBY      up           10/16/8      lan2
    PRIMARY      up           10/12/6      lan0

    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE
    pkg1         up           running      enabled      primary

    Script_Parameters:
      ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
      Subnet     up                                 131.115.154.0

    Node_Switching_Parameters:
      NODE_TYPE    STATUS       SWITCHING    NAME
      Primary      up           enabled      primary      (current)
      Alternate    up           enabled      primary

  NODE         STATUS       STATE
  primary      up           running

  Network_Parameters:
    INTERFACE    STATUS       PATH         NAME
    PRIMARY      up           10/4/8       lan1
    PRIMARY      up           10/12/6      lan0
    STANDBY      up           10/16/8      lan2

    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE
    appl         up           running      disabled     primary

    Script_Parameters:
      ITEM       STATUS   MAX_RESTARTS   RESTARTS   NAME
      Subnet     up                                 131.115.154.0

    Node_Switching_Parameters:
      NODE_TYPE    STATUS       SWITCHING    NAME
      Primary      up           enabled      primary      (current)
      Alternate    up           enabled      primary

Notes on the output:
• The cluster cluster1 is up and running.
• Node 1 has 3 networks: one primary data network, one heartbeat network and one standby.
• pkg1 is currently running on node 1 and is ready to switch to node 2 if node 1 or pkg1 fails.
• pkg1 is a member of subnet 131.115.154.0, and this subnet is up.
• The same information is shown for the second node and the packages running on it.
Troubleshooting:
Logfiles:
MC/ServiceGuard uses /var/adm/syslog/syslog.log as the global logfile. All cluster-specific messages, such as cluster start/stop, are logged here.
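To check recent cluster activity on a node, one can for example search the syslog for messages from the cluster daemon (a simple illustration):

# Show the most recent cluster daemon messages
grep cmcld /var/adm/syslog/syslog.log | tail -20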
Package log:
Each package has its own log file. When the package starts or stops, the logfile on the node where the package starts or stops is updated.
file: /etc/cmcluster/pkg1/control.sh.log
(extract from log at package start)
Cluster verification:
Use cmscancl to get a full report of the cluster:
file: /tmp/scancl.out:
.
.
.
4. File lists:
Cluster configuration:
file: /etc/cmcluster/cmclconf.ascii
Package configuration:
file: /etc/cmcluster/pkg1/pkg1.conf
Package control script:
file: /etc/cmcluster/pkg1/pkg1.cntl
5. MC/ServiceGuard Commands
cmapplyconf
Verify and apply MC/ServiceGuard and MC/LockManager cluster configuration and package configuration files.
cmapplyconf verifies the cluster configuration and package configuration specified in the cluster_ascii_file and the associated pkg_ascii_file(s), creates or updates the binary configuration file, called cmclconfig, and distributes it to all nodes. This binary configuration file contains the cluster configuration information as well as package configuration information for all packages specified. This file, which is used by the cluster daemons to manage the entire cluster and package environment, is kept in the /etc/cmcluster directory.
If changes to either the cluster configuration or to any of the package configuration files are needed, first update the appropriate ASCII file(s) (cluster or package), then validate the changes using the cmcheckconf command and then use cmapplyconf again to verify and redistribute the binary file to all nodes. The cluster and package configuration can be modified whether the cluster is up or down, although some configuration changes require either the cluster or the package to be halted. Please refer to the manual for more detail. The cluster ASCII file only needs to be specified if configuring the cluster for the first time, or if adding or deleting nodes from the cluster. The package ASCII file only needs to be specified if the package is being added, or if the package configuration is being modified.
It is recommended that the user run the cmgetconf command to get either the cluster ASCII configuration file or the package ASCII configuration file whenever changes to the existing configuration are required.
Note that cmapplyconf only verifies and distributes the cluster configuration or package files; it will not cause the cluster daemon to start. The same kind of processing will apply to the package configuration to determine whether to add or delete package nodes, package subnets, etc. Not all package configuration changes require the package to be halted.
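For this installation, a verify-and-apply run would look roughly like this (a sketch; the file names are the ones listed in section 4 and Appendix D):

# Validate the cluster and package ASCII files
cmcheckconf -C /etc/cmcluster/cmclconf.ascii \
            -P /etc/cmcluster/kant/kant.conf -P /etc/cmcluster/hume/hume.conf

# Build the binary cmclconfig file and distribute it to fkant and fhume
cmapplyconf -C /etc/cmcluster/cmclconf.ascii \
            -P /etc/cmcluster/kant/kant.conf -P /etc/cmcluster/hume/hume.conf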
cmcheckconf
Check high availability cluster configuration and/or package configuration files.
cmcheckconf verifies the cluster configuration as specified by the cluster_ascii_file and/or the package configuration files specified by each pkg_ascii_file in the command. If the cluster has already been configured previously, the cmcheckconf command will compare the configuration in the cluster_ascii_file against the previously configured information stored in the binary configuration file and validate the changes. The same rules apply to the pkg_ascii_file. It is not necessary to halt either the cluster or any of the packages to run the cmcheckconf command.

cmdeleteconf
Delete either the cluster or the package configuration.
cmdeleteconf deletes either the entire cluster configuration, including all its packages, or only the specified package configuration. If neither cluster_name nor package_name is specified, cmdeleteconf will delete the local cluster's configuration and all its packages. If only the package_name is specified, the configuration of package_name in the local cluster is deleted. If both cluster_name and package_name are specified, the package must be configured in the cluster_name, and only the package package_name will be deleted. The local cluster is the cluster that the node running the cmdeleteconf command belongs to.
cmgetconf
Get cluster or package configuration information.
cmgetconf obtains either the cluster configuration, not including the package configuration, or the specified package's configuration information, and writes it either to the output_filename file or to stdout. This command can be run whether the cluster is up or down. If neither cluster_name nor package_name is specified, cmgetconf will obtain the local cluster's configuration. If both cluster_name and package_name are specified, the package must be configured in the cluster_name, and only the package configuration for package_name will be written to output_filename or to stdout.

cmhaltcl
Halt a high availability cluster.
cmhaltcl causes all nodes in a configured cluster to stop their cluster daemons, optionally halting all packages or applications in the process. This command will halt all the daemons on all currently running systems. If the user only wants to shut down a subset of daemons, the cmhaltnode command should be used instead.

cmhaltnode
Halt a node in a high availability cluster.
cmhaltnode causes a node to halt its cluster daemon and remove itself from the existing cluster. When cmhaltnode is run on a node, the cluster daemon is halted and, optionally, all packages that were running on that node are moved to other nodes if possible.

cmhaltpkg
Halt a high availability package.
cmhaltpkg performs a manual halt of high availability package(s) running on MC/ServiceGuard or MC/LockManager clusters. This command may be run on any node within the cluster and may operate on any package within the cluster.
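As an example of how the halt and run commands work together when moving a package by hand (a sketch using this cluster's package and node names; cmrunpkg and cmmodpkg are described below):

# Halt the kant package on its current node
cmhaltpkg kant

# Start it on the other node, then re-enable package switching
cmrunpkg -n fhume kant
cmmodpkg -e kant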
cmmodpkg
Enable or disable switching attributes for a high availability package.
cmmodpkg enables or disables the ability of a package to switch to another node upon failure of the package, and it enables or disables a particular node from running specific packages. Switching for a package can be enabled or disabled globally. For example, if a globally disabled package fails, it will not switch to any other node, and if a globally enabled package fails, it will attempt to switch to the first available node on which it is configured to run.
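Typical invocations, using this cluster's package and node names (a sketch):

# Re-enable global switching for the kant package (for example after a manual cmrunpkg)
cmmodpkg -e kant

# Enable (or, with -d, disable) the node fhume as a possible node for the kant package
cmmodpkg -e -n fhume kant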
cmquerycl
Query cluster or node configuration information.
cmquerycl searches all specified nodes for cluster configuration and Logical Volume Manager (LVM) information. Cluster configuration information includes network information such as LAN interfaces, IP addresses, bridged networks and possible heartbeat networks. LVM information includes volume group (VG) interconnection and file system mount point information. This command should be run as the first step in preparing for cluster configuration. It may also be used as a troubleshooting tool to identify the current configuration of a cluster.
cmruncl
Run a high availability cluster.
cmruncl causes all nodes in a configured cluster, or all nodes specified, to start their cluster daemons and form a new cluster. This command should only be run when the cluster is not active on any of the configured nodes. If a cluster is already running on a subset of the nodes, the cmrunnode command should be used to start the remaining nodes and force them to join the existing cluster.

cmrunnode
Run a node in a high availability cluster.
cmrunnode causes a node to start its cluster daemon to join the existing cluster. Starting a node will not cause any active packages to be moved to the new node. However, if a package is DOWN, has its switching enabled, and is able to run on the new node, that package will automatically run there.

cmrunpkg
Run a high availability package.
cmrunpkg runs a high availability package(s) that was previously halted. This command may be run on any node within the cluster and may operate on any package within the cluster. If a node is not specified, the node on which the command is run will be used. This will result in an error if the current node is not able to run the package or is not in the list of possible owners of the package. When a package is started on a new node, the package's run script is executed.

cmscancl
Gather system configuration information from nodes with MC/ServiceGuard or MC/LockManager installed.
cmscancl is a configuration report and diagnostic tool which gathers system software and hardware configuration information from a list of nodes, or from all the nodes in a cluster. The information that this command displays includes LAN device configuration, network status and interfaces, file systems, LVM configuration, link-level connectivity, and the data from the binary cluster configuration file. This command can be used as a troubleshooting tool or as a data collection tool.
cmviewcl
View information about the current high availability cluster.
cmviewcl displays the current status information of a cluster. Output can be displayed for the whole cluster or it may be limited to particular nodes or packages.

cmviewconf
View MC/ServiceGuard or MC/LockManager cluster configuration information.
cmviewconf collects and displays the cluster configuration information, in ASCII format, from the binary configuration file for an existing cluster. Optionally, the output can be written to a file. This command can be used as a troubleshooting tool to identify the configuration of a cluster.
Appendix A  Configuration of the FC60 disk array

A.1 Physical configuration
The following figure shows how the FC60 disk array is physically configured. The figure contains up-to-date information about which disk positions are in use. For information about how the array should be configured optimally as it grows, see the manual HP SureStore E Disk Array FC60 User's Guide, in particular the chapter on "Array Planning".
Note in particular that the 6 disk enclosures are connected to separate SCSI controllers. Both for redundancy and for performance, new disk mechanisms should be spread across the maximum number of SCSI channels.
[Fig A-1 Placement of disks in the FC60 (High Capacity): the controller enclosure (Controller Module A and B, fans, power supplies) sits above six disk enclosures. Each disk enclosure has duplicated Bus Control Cards (A and B), fans, power supplies and ten disk slots, with channel:ID addresses n:0, n:8, n:1, n:9, n:2, n:10, n:3, n:11, n:4, n:12 for enclosure n = 1 to 6. The figure marks which slots hold disk modules and which slots are free.]
A.2 Partitioning of the FC60 disk array
The following figure shows how the FC60 is configured with partitions (LUNs). The figure contains up-to-date information about which disks are used in which RAID groups. The manual HP SureStore E Disk Array FC60 User's Guide gives recommendations for configuring LUNs. The physical placement of disks in the slots is identical to the figure in A.1.
[Figure: LUN layout. Five RAID-5 LUNs, mapped to the addresses 0:0, 0:1, 0:2, 0:3 and 0:5, are each built from one 36GB disk in each of the six disk enclosures (Disk Enclosure 1-6). Disk Enclosure 1 additionally holds one 36GB global hot spare; the remaining slots are free. The figure shows which disk mechanisms are configured together into each LUN and which address each LUN is given.]
A.3 FC60 status
Up-to-date status for the FC60 can be obtained from the system with the Support Tool Manager. This can be done interactively with xstm (an X-based GUI) or with a script. The following script, run from fhume, gives the status of the array:
/usr/sbin/cstm 2>&1 << !
sel path 0/4/0/0.8.0.255.0.4
info;wait
infolog
view
done
!
The following is the status as of 03.01.01:
=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=-+-=
-- Information Tool Log for A5277A Array on path 0/4/0/0.8.0.255.0.4 --
Log creation time: Wed Jan  3 10:03:08 2001
Hardware path: 0/4/0/0.8.0.255.0.4
Look at the bottom of this file for a summary of warning and/or error messages.
CONTROLLER ENCLOSURE:
=====================
Array Serial Number:   000600A0B8090A38     Array Alias:
Front Fan Module:      Optimal              Back Fan Module:   Optimal
Power Supply 1:        Optimal              Power Supply 2:    Optimal
Battery:               Optimal              Temp Sensor:       Optimal

Controller:                      A
State:                           Optimal
Model Number:                    A5277A
Firmware Revision:               03.01.03.03
Cache Memory/Block Size:         256 mb/4kb
HP Revision:                     HP03
NVSRAM Checksum:                 0x343C86DF
Mode:                            Active
Serial Number:                   1T02310494
Bootware Revision:               03.01.03.02
LoopID/ALPA:                     5 /0xE0
Battery Age (90 day increments): 2

Controller:                      B (reporting controller)
State:                           Optimal
Model Number:                    A5277A
Firmware Revision:               03.01.03.03
Cache Memory/Block Size:         256 mb/4kb
HP Revision:                     HP03
NVSRAM Checksum:                 0x343C86DF
Mode:                            Active
Serial Number:                   1T91810179
Bootware Revision:               03.01.03.02
LoopID/ALPA:                     4 /0xE1
Battery Age (90 day increments): 2
DISK ENCLOSURE(S):
==================
Disk Enclosure 0
  Controller Card: A   Serial Number: USSA04030572   Firmware Revision: HP04
  Controller Card: B   Serial Number: USSA04030569   Firmware Revision: HP04
Disk Enclosure 1
  Controller Card: A   Serial Number: USSA04030567   Firmware Revision: HP04
  Controller Card: B   Serial Number: USSA04030517   Firmware Revision: HP04
Disk Enclosure 2
  Controller Card: A   Serial Number: USSA04030560   Firmware Revision: HP04
  Controller Card: B   Serial Number: USSA04030636   Firmware Revision: HP04
Disk Enclosure 3
  Controller Card: A   Serial Number: USSA04038340   Firmware Revision: HP04
  Controller Card: B   Serial Number: USSB01043870   Firmware Revision: HP04
Disk Enclosure 4
  Controller Card: A   Serial Number: USSA04038647   Firmware Revision: HP04
  Controller Card: B   Serial Number: USSB01044076   Firmware Revision: HP04
Disk Enclosure 5
  Controller Card: A   Serial Number: USSA04038563   Firmware Revision: HP04
  Controller Card: B   Serial Number: USSB01043954   Firmware Revision: HP04

Disk Enclosure 0
  A Status: Optimal   Fan A: Optimal   Power Supply A: Optimal   Temp Sensor: Optimal
  B Status: Optimal   Fan B: Optimal   Power Supply B: Optimal
Disk Enclosure 1
  A Status: Optimal   Fan A: Optimal   Power Supply A: Optimal   Temp Sensor: Optimal
  B Status: Optimal   Fan B: Optimal   Power Supply B: Optimal
Disk Enclosure 2
  A Status: Optimal   Fan A: Optimal   Power Supply A: Optimal   Temp Sensor: Optimal
  B Status: Optimal   Fan B: Optimal   Power Supply B: Optimal
Disk Enclosure 3
  A Status: Optimal   Fan A: Optimal   Power Supply A: Optimal   Temp Sensor: Optimal
  B Status: Optimal   Fan B: Optimal   Power Supply B: Optimal
Disk Enclosure 4
  A Status: Optimal   Fan A: Optimal   Power Supply A: Optimal   Temp Sensor: Optimal
  B Status: Optimal   Fan B: Optimal   Power Supply B: Optimal
Disk Enclosure 5
  A Status: Optimal   Fan A: Optimal   Power Supply A: Optimal   Temp Sensor: Optimal
  B Status: Optimal   Fan B: Optimal   Power Supply B: Optimal
MAP:
====
        Slot:   0      1      2      3      4      5    6    7    8     9
Enc 0  Ch:ID  | 1:0  | 1:8  | 1:1  | 1:9  | 1:2  |    |    |    |    | 1:12 |
       Status | OPT  | OPT  | OPT  | OPT  | OPT  |    |    |    |    | OPT  |
       LUN    | 0 R-5| 3 R-5| 1 R-5| 5 R-5| 2 R-5|    |    |    |    | GHS  |
Enc 1  Ch:ID  | 2:0  | 2:8  | 2:1  | 2:9  | 2:2  |    |    |    |    |      |
       Status | OPT  | OPT  | OPT  | OPT  | OPT  |    |    |    |    |      |
       LUN    | 0 R-5| 3 R-5| 1 R-5| 5 R-5| 2 R-5|    |    |    |    |      |
Enc 2  Ch:ID  | 3:0  | 3:8  | 3:1  | 3:9  | 3:2  |    |    |    |    |      |
       Status | OPT  | OPT  | OPT  | OPT  | OPT  |    |    |    |    |      |
       LUN    | 0 R-5| 3 R-5| 1 R-5| 5 R-5| 2 R-5|    |    |    |    |      |
Enc 3  Ch:ID  | 4:0  | 4:8  | 4:1  | 4:9  | 4:2  |    |    |    |    |      |
       Status | OPT  | OPT  | OPT  | OPT  | OPT  |    |    |    |    |      |
       LUN    | 0 R-5| 3 R-5| 1 R-5| 5 R-5| 2 R-5|    |    |    |    |      |
Enc 4  Ch:ID  | 5:0  | 5:8  | 5:1  | 5:9  | 5:2  |    |    |    |    |      |
       Status | OPT  | OPT  | OPT  | OPT  | OPT  |    |    |    |    |      |
       LUN    | 0 R-5| 3 R-5| 1 R-5| 5 R-5| 2 R-5|    |    |    |    |      |
Enc 5  Ch:ID  | 6:0  | 6:8  | 6:1  | 6:9  | 6:2  |    |    |    |    |      |
       Status | OPT  | OPT  | OPT  | OPT  | OPT  |    |    |    |    |      |
       LUN    | 0 R-5| 3 R-5| 1 R-5| 5 R-5| 2 R-5|    |    |    |    |      |

CONFIGURATION:
==============
      Owning                       Approximate   Seg.Size   Number     Rebuild
LUN   Cntlr    Status       Type   Capacity      (Kbytes)   of Disks   Percent
---   ------   ----------   ----   -----------   --------   --------   -------
0     A        Optimal      R-5    169.4gb       16         6
1     A        Optimal      R-5    169.4gb       16         6
2     A        Optimal      R-5    169.4gb       16         6
3     B        Optimal      R-5    169.4gb       16         6
5     B        Optimal      R-5    169.4gb       16         6

CACHE INFORMATION:
==================
LUN    WCE   RCD   CME   CWOB   WCA   RCA   CMA
-----  ---   ---   ---   ----   ---   ---   ---
0      X           X            X     X     X
1      X           X            X     X     X
2      X           X            X     X     X
3      X           X            X     X     X
5      X           X            X     X     X

DISKS:
======
Enc   C:ID   Status    Vendor    Product ID   Rev    Serial Number   Aprx.Cap
----  ----   -------   -------   ----------   ----   -------------   --------
LUN 0:
0     1:0    Optimal   SEAGATE   ST336704LC   HP01   3CD07391        33.9gb
1     2:0    Optimal   SEAGATE   ST336704LC   HP01   3CD0741A        33.9gb
2     3:0    Optimal   SEAGATE   ST336704LC   HP01   3CD06VBQ        33.9gb
3     4:0    Optimal   SEAGATE   ST336704LC   HP01   3CD075HC        33.9gb
4     5:0    Optimal   SEAGATE   ST336704LC   HP01   3CD06JX8        33.9gb
5     6:0    Optimal   SEAGATE   ST336704LC   HP01   3CD06TGG        33.9gb
LUN 1:
0     1:1    Optimal   SEAGATE   ST336704LC   HP01   3CD0730F        33.9gb
1     2:1    Optimal   SEAGATE   ST336704LC   HP01   3CD06ZV9        33.9gb
2     3:1    Optimal   SEAGATE   ST336704LC   HP01   3CD073TW        33.9gb
3     4:1    Optimal   SEAGATE   ST336704LC   HP01   3CD06WAK        33.9gb
4     5:1    Optimal   SEAGATE   ST336704LC   HP01   3CD0709F        33.9gb
5     6:1    Optimal   SEAGATE   ST336704LC   HP01   3CD0715T        33.9gb
LUN 2:
0     1:2    Optimal   SEAGATE   ST336704LC   HP01   3CD06J4N        33.9gb
1     2:2    Optimal   SEAGATE   ST336704LC   HP01   3CD06R7H        33.9gb
2     3:2    Optimal   SEAGATE   ST336704LC   HP01   3CD06NPP        33.9gb
3     4:2    Optimal   SEAGATE   ST336704LC   HP01   3CD0752S        33.9gb
4     5:2    Optimal   SEAGATE   ST336704LC   HP01   3CD073V0        33.9gb
5     6:2    Optimal   SEAGATE   ST336704LC   HP01   3CD069CA        33.9gb
LUN 3:
0     1:8    Optimal   SEAGATE   ST336704LC   HP01   3CD06T1Z        33.9gb
1     2:8    Optimal   SEAGATE   ST336704LC   HP01   3CD072HB        33.9gb
2     3:8    Optimal   SEAGATE   ST336704LC   HP01   3CD06PCQ        33.9gb
3     4:8    Optimal   SEAGATE   ST336704LC   HP01   3CD073NQ        33.9gb
4     5:8    Optimal   SEAGATE   ST336704LC   HP01   3CD06XEH        33.9gb
5     6:8    Optimal   SEAGATE   ST336704LC   HP01   3CD071ZY        33.9gb
LUN 5:
0     1:9    Optimal   SEAGATE   ST336704LC   HP01   3CD05WCM        33.9gb
1     2:9    Optimal   SEAGATE   ST336704LC   HP01   3CD070YQ        33.9gb
2     3:9    Optimal   SEAGATE   ST336704LC   HP01   3CD06XJF        33.9gb
3     4:9    Optimal   SEAGATE   ST336704LC   HP01   3CD072Z8        33.9gb
5     6:9    Optimal   SEAGATE   ST336704LC   HP01   3CD06QZP        33.9gb
4     5:9    Optimal   SEAGATE   ST336704LC   HP01   3CD06XLS        33.9gb
Hot Spares:
0     1:12   Optimal   SEAGATE   ST336704LC   HP01   3CD0C4PP        33.9gb
Unassigned:
none
SUMMARY:
========
The following warning and/or error message(s) exist:
none
Appendix B  Shared volume groups
Table B-1 Shared volume groups for MC/ServiceGuard cluster cluster1

                            fkant                                     fhume
Volume group   minor        Primary Path       Alternate Path         Primary Path       Alternate Path
vgkant01       0x050000     /dev/dsk/c9t0d0    /dev/dsk/c10t0d0       /dev/dsk/c9t0d0    /dev/dsk/c10t0d0
vgkant02       0x060000     /dev/dsk/c10t0d1   /dev/dsk/c9t0d1        /dev/dsk/c10t0d1   /dev/dsk/c9t0d1
vgkant03       0x070000     /dev/dsk/c9t0d2    /dev/dsk/c10t0d2       /dev/dsk/c9t0d2    /dev/dsk/c10t0d2
vghume01       0x080000     /dev/dsk/c10t0d3   /dev/dsk/c9t0d3        /dev/dsk/c10t0d3   /dev/dsk/c9t0d3
vghume02       0x090000     /dev/dsk/c9t0d5    /dev/dsk/c10t0d5       /dev/dsk/c9t0d5    /dev/dsk/c10t0d5
Appendix C  NFS filesystems for the MC/ServiceGuard packages kant and hume

kant:

Package local (the package runs on this node):
Logical volume         Mount point       Local link for users and export
/dev/vgkant01/lvol1    /uio/kant/fu1     /uio/kant/u1 -> /uio/kant/fu1
/dev/vgkant01/lvol2    /uio/kant/fu2     /uio/kant/u2 -> /uio/kant/fu2

Package remote (the package runs on the other node):
Remote filesystem      Mount point       Local link for users and export
kant:/uio/kant/fu1     /uio/kant/nfsu1   /uio/kant/u1 -> /uio/kant/nfsu1
kant:/uio/kant/fu2     /uio/kant/nfsu2   /uio/kant/u2 -> /uio/kant/nfsu2

hume:

Package local (the package runs on this node):
Logical volume         Mount point       Local link for users and export
/dev/vghume01/lvol1    /uio/hume/fu1     /uio/hume/u1 -> /uio/hume/fu1
/dev/vghume01/lvol2    /uio/hume/fu2     /uio/hume/u2 -> /uio/hume/fu2

Package remote (the package runs on the other node):
Remote filesystem      Mount point       Local link for users and export
hume:/uio/hume/fu1     /uio/hume/nfsu1   /uio/hume/u1 -> /uio/hume/nfsu1
hume:/uio/hume/fu2     /uio/hume/nfsu2   /uio/hume/u2 -> /uio/hume/nfsu2
Appendix D  Configuration files
Attached are the configuration files that describe the cluster.

Index
/etc/cmcluster/cmclconf.ascii    - the cluster's configuration file
/etc/cmcluster/kant/kant.conf    - the kant package's configuration file
/etc/cmcluster/kant/kant.cntl    - the kant package's run script
/etc/cmcluster/hume/hume.conf    - the hume package's configuration file
/etc/cmcluster/hume/hume.cntl    - the hume package's run script