Charon-VAX in Rdb Engineering
Norman Lastovica
Oracle Rdb Engineering
Oracle New England Development Center
Norman.Lastovica@oracle.com
Problem Overview
• Needed to reduce computer lab footprint
  – Floor space, power & cooling
• Very old hardware maintenance headache
  – Unreliable & difficult to repair
  – VAX computers over 15 years old
  – Some star couplers, HSJs, disks even older
Problem Overview
• Performance of CPU, memory, Ethernet, disks, controllers & busses lags behind Alpha & I64
• Need multiple VAX environments for building, testing, debugging & support for the Rdb product family
Charon-VAX Solution
• Replace approximately 12 large VAX systems (6000 & 7000 class) in several clusters with Charon-VAX emulators
  – Consolidate/simplify existing systems & clusters
• Migrate primarily to SAN-based storage
• Sub-goal of improved performance & reliability for users (at least no reduction)
Remove VAX Hardware?
“Oracle Corporation supports VAX versions of Oracle Rdb and Oracle CODASYL DBMS and their related products running on CHARON-VAX provided that any problems reported can be reproduced by Oracle Support on an actual VAX.”

“HP Services supports HP OpenVMS software on the CHARON-VAX and CHARON-AXP emulators running on HP systems only. Existing software service contracts are valid on supported OpenVMS VAX and OpenVMS Alpha AXP applications running on the appropriate emulator. HP fixes software problems if they are also seen in the comparable VAX or Alpha AXP environment.”
Extensive Testing
• Before announcing support for Rdb on Charon-VAX:
  – Extensive Rdb & DBMS regression tests
  – Various performance tests
  – Consultations with HP & SRI
Performance
Prime Number Generation
• C program from Internet (a stand-in sketch follows this slide)
• Single-user
• CPU intensive
• Dell laptop host system
  – Single 2GHz Intel CPU
  – …at 35,000 feet
[Chart: seconds (scale 0–20), VAX 6650 vs. Charon-VAX]
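The slide identifies the benchmark only as a "C program from Internet". Purely as a hypothetical stand-in for this kind of single-user, CPU-bound test, a minimal trial-division prime counter in C could look like the sketch below; the 100,000 limit is an arbitrary assumption, not taken from the slide.

/* Minimal trial-division prime counter (illustrative stand-in only). */
#include <stdio.h>

int main(void)
{
    const long limit = 100000;   /* assumed upper bound */
    long n, d, count = 0;
    int is_prime;

    for (n = 2; n <= limit; n++) {
        is_prime = 1;
        for (d = 2; d * d <= n; d++) {
            if (n % d == 0) {    /* found a divisor: not prime */
                is_prime = 0;
                break;
            }
        }
        if (is_prime)
            count++;
    }
    printf("%ld primes <= %ld\n", count, limit);
    return 0;
}

Any similar integer-heavy loop exercises the emulated CPU in the same way, which is why the comparison is a reasonable first look at raw CPU emulation speed.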
More Prime Number Generation
• C program from Internet
• Single-user
• CPU intensive
• HP BL25p host
  – Dual 2.6GHz dual-core AMD
[Chart: seconds (scale 0–400), VAX 6650 vs. Charon 6630]
Random Floating Point Additions
• Update random floating point numbers in 1MB global section (sketched below)
• Single-user
• HP BL25p host
  – Dual 2.6GHz dual-core AMD
[Chart: seconds (scale 0–80), VAX 6650 vs. Charon 6630]
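The test program itself is not shown. A minimal sketch of the described workload, assuming a plain malloc'd 1MB buffer in place of the VMS global section (which would normally be created and mapped with system services) and an arbitrary update count, might look like this:

/* Repeatedly add random values at random offsets in a 1MB float array. */
#include <stdio.h>
#include <stdlib.h>

#define SECTION_BYTES (1024 * 1024)           /* 1MB region, as on the slide */

int main(void)
{
    size_t nelems = SECTION_BYTES / sizeof(float);
    float *section = malloc(SECTION_BYTES);   /* stand-in for the global section */
    long i, updates = 10000000L;              /* assumed update count */

    if (section == NULL)
        return 1;
    for (i = 0; i < (long)nelems; i++)        /* fill region with random values */
        section[i] = (float)rand() / RAND_MAX;

    for (i = 0; i < updates; i++) {
        /* add a random value to a randomly chosen element */
        size_t slot = (size_t)rand() % nelems;
        section[slot] += (float)rand() / RAND_MAX;
    }
    printf("%ld random floating point additions done\n", updates);
    free(section);
    return 0;
}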
Lock Request Latencies: Local & Remote
• HP BL25p host
  – Dual 2.6GHz dual-core AMD
[Chart: local & remote lock request latencies in microseconds, Hardware 6650 vs. Charon-6630; reported values include 58, 12, 1060 & 1552 µs]
DBMS Regression Test
• Sun V65 host
  – Dual 3.06GHz Intel with HT
  – MSCP served disks via Ethernet
[Chart: CPU and elapsed time (scale 0–70), VAX 6550 vs. Charon-VAX]
Rdb Database Populate
• VAX 6650
  – HSJ storage
• HP DL 585 host
  – Quad 2.4GHz AMD
  – Single IDE disk
• Single user storing data into database
• Average for 100 txn
[Chart: seconds (scale 0–20), VAX 6650 vs. Charon-VAX]
Single User OLTP
• Single user
• Random DB update
• Average for 1,000 txn
• VAX 6650
  – HSJ storage
• HP DL 585 host
  – Quad 2.4GHz AMD
  – Single IDE disk
[Chart: seconds (scale 0–2), VAX 6650 vs. Charon-VAX]
Synchronous Random 5-block IO
• $IOT /COUNT=2000 /QUE=1 /SIZE=5 SYS$SYSDEVICE (illustrated below)
• CI HSJ40 on VAX 6650
• Fibre EVA3000 on Charon 6630
[Chart: IO/sec (scale 0–350), VAX 6650 vs. Charon 6630]
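The $IOT command above (apparently an IO exerciser invoked on VMS) is the actual test. Purely as an illustration of what such a test measures, here is a rough POSIX-style equivalent in C: 2000 synchronous random reads of 5 blocks (2560 bytes) at queue depth 1. The device path and the 1GB test extent are placeholders, and the whole-second timing is deliberately coarse.

/* Illustrative only: not the $IOT utility used for the slide's numbers. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define BLOCK     512
#define XFER_SIZE (5 * BLOCK)                 /* /SIZE=5 -> five disk blocks */
#define IO_COUNT  2000                        /* /COUNT=2000 */

int main(void)
{
    char buf[XFER_SIZE];
    off_t extent = (off_t)1024 * 1024 * 1024;          /* assumed 1GB region */
    int fd = open("/path/to/test-device", O_RDONLY);   /* placeholder path */
    time_t start, stop;
    double secs;
    int i;

    if (fd < 0)
        return 1;
    start = time(NULL);
    for (i = 0; i < IO_COUNT; i++) {
        /* random block-aligned offset, one synchronous read (queue depth 1) */
        off_t offset = ((off_t)rand() % (extent / BLOCK)) * BLOCK;
        if (pread(fd, buf, XFER_SIZE, offset) < 0)
            return 1;
    }
    stop = time(NULL);

    secs = difftime(stop, start);             /* coarse, whole-second timing */
    if (secs < 1.0)
        secs = 1.0;
    printf("%.0f IO/sec\n", IO_COUNT / secs);
    close(fd);
    return 0;
}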
Queue of Random 5-block IO
• $IOT /COUNT=2000 /QUE=8 /SIZE=5 SYS$SYSDEVICE
• CI HSJ40 on VAX 6650
• Fibre EVA3000 on Charon 6630
[Chart: IO/sec (scale 0–400), VAX 6650 vs. Charon 6630]
Queue of Random 5-block IO
• $IOT /COUNT=7500 /QUE=8 /SIZE=5 RDB$TEST_SYS1:
• Software RAID set of 10 disks on CI HSJ40s on VAX 6650
• Fibre EVA3000 on Charon 6630
[Chart: IO/sec (scale 0–10,000), VAX 6650 vs. Charon 6630]
Create and Sort File of Random Records
• 1,000,000 records / 256,167 blocks (create step sketched below)
[Charts: CPU time and elapsed time (scale 0–03:36.0) for Create and Sort, VAX 6650 vs. Charon 6630]
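For context, a hypothetical sketch of the create step: write 1,000,000 fixed-length records of random text to a sequential file. The 128-byte record length is a guess (256,167 blocks for 1,000,000 records works out to roughly 131 bytes per record including record overhead), and the file name is a placeholder. The sort step would then be run separately, for example with the system sort utility against this file.

/* Illustrative create step only; record length and file name are assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define RECORDS 1000000L
#define RECLEN  128                       /* guessed record length */

int main(void)
{
    FILE *f = fopen("random_records.dat", "w");
    char rec[RECLEN + 2];
    long i;
    int j;

    if (f == NULL)
        return 1;
    for (i = 0; i < RECORDS; i++) {
        for (j = 0; j < RECLEN; j++)      /* random upper-case text */
            rec[j] = 'A' + rand() % 26;
        rec[RECLEN] = '\n';
        rec[RECLEN + 1] = '\0';
        fputs(rec, f);
    }
    fclose(f);
    printf("%ld records written\n", i);
    return 0;
}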
What Host System To Pick?
HP ProLiant BL25p Server Blade
• 1.7in (4.3cm) x 10.3in (26.2cm) x 28in (71cm)
• 21 lb (9.5 kg)
• Two Dual-Core AMD Opteron™ (2.6 GHz)
• 6GB PC3200 DDR SDRAM at 400 MHz
• 4 Gigabit NIC ports
• Dual Port 2-Gb Fibre Channel Adapter
• Internal HP 36GB U320 15K disk
Why BL25p?
• Two dual-core processors = 4 effective CPUs
  – Run CHARON-VAX/6630 Plus for Windows
  – Windows Server 2003 Standard Edition
  – Only 4GB memory of our 6GB usable due to limit in Standard Edition (larger limits in “higher” Editions) – whoops
• More cost & space effective than 4p DL585
  – Very near same peak performance (2.6GHz dual-core vs. 2.8GHz single-core)
Why BL25p?
• Up to 8 BL25p servers in a single 10.5” tall enclosure
  – Using existing rack space = no additional floor space
• Remote management capable
  – Software KVM switch console / ILO
• Alternately: BL35p
  – 2 NIC ports, two 2.4GHz dual-core CPUs, 5k or 10k internal disks
  – Same price per server
  – Up to 16 per enclosure
Blade Enclosure
• 10.5 inches high
• 19 inch rack mount
• 8 BL25p or 16 BL35p
How to Deploy?
Phased Implementation Plan
1. Replace 3 test clusters (total of 8 nodes) with a single 2-node cluster
   • Install, test & experiment with the new hardware, then migrate the workload and shut down the old systems
   • Work out installation and configuration issues to avoid impacting the development cluster or test environments
2. Replace 3 VAX nodes in development cluster
Best of Intentions
• Multiple UPS failures in a single day
• 2 VAX systems in the development cluster suffered serious hardware damage – multiple power supplies failed
• Led to accelerated Charon-VAX deployment
Original Development Cluster Configuration
[Diagram: 2 I64 rx4640 (V8.2-1), 2 Alpha (V8.2) & 2 VAX 6650 (V7.3); Ethernets carrying DECnet, TCP/IP & SCS; 2 CI rails with 2 star couplers and HSJ40s; 2Gb SAN with EVA5000]
New Development Cluster Configuration
[Diagram: 2 I64 rx4640, 2 Alpha & 3 BL25p / Charon-6630; Ethernets carrying DECnet, TCP/IP & SCS; 2Gb SAN with EVA5000]
Test Cluster Configuration
[Diagram: 2 BL25p / Charon-6630; Ethernets carrying DECnet, TCP/IP & SCS; 2Gb SAN with EVA5000]
Host Detail Configuration
[Diagram: 6 BL25p / Charon-6630 hosts, each with a local Windows system disk and a page/swap container file for VMS, connected to an EVA5000]
• Shared “DUA” disks per cluster
• VMS system & data disks
• “Raw” LUNs on SAN presented to Windows
VAX Disks on SAN
• Charon presents a raw SAN LUN as an MSCP “DUA” device
• VAX/VMS sees it as “DUAx:”, just like a disk from an HSJ
• If needed, it must be MSCP served from the VAX to other Alpha/I64 nodes – they cannot access the LUN directly because it appears to them as a “DGA” device
• Multiple Charon-VAX systems in the cluster access the same SAN LUN with the same DUA name
Memory Configuration
• Various 128MB, 256MB & 512MB on 76x0, 66x0, 65x0 & 64x0 test and development systems
• 1GB on our Charon-6630
  – Can be increased to 2GB with enough host system memory
• Perhaps allow VMS processes larger caches and/or working sets to reduce paging & IO
Disk Configuration
• Local host Windows system disk
  – Could alternately have been on SAN
• VAX system disk on EVA5000 disk unit shared between multiple Charon hosts in cluster
• VAX system page/swap disk is container file on local host disk
Disk Performance
• Access to VAX system disk on SAN roughly 2 to 50 times faster than CI-based HSJ storage
• MSCP served disks from Alpha perform about equally from Charon-6630 (via NI) as from hardware VAX (via CI)
• Once the new configuration proves reliable, CI-related hardware will be retired
System Performance
• Single Charon-6630 roughly 3 times faster than hardware 6650
  – On our host, a Charon-66x0 CPU runs around 3 to 6 times faster than a hardware 66x0 CPU
• Fewer CPUs should result in less inter-processor contention (i.e., MPSYNCH time)
Application Performance
• First Rdb regression test suite run time reduced from about 12 hours to about 8 hours – roughly a 33% reduction
Relatively Simple Deployment
• Install & configure Windows Server 2003 Standard Edition
  – Install Windows updates / patches, anti-virus, Charon
  – Disable documented Windows services
  – Configure prototype Charon-VAX emulator template files
• Replicate Windows disk to other hosts
Relatively Simple Deployment
• Create VAX disk LUNs on SAN
  1. Make visible to existing Alpha or I64 system in cluster
  2. BACKUP/IMAGE existing VAX disks to new LUNs
  3. Make LUNs visible to PC host servers
  4. Shut down VAX systems cluster-wide, dismount CI disks on remaining nodes
  5. Start Charon-VAX and boot from LUN on multiple hosts
  6. Mount new disks (served via MSCP to cluster from Charon-VAX nodes) on remaining nodes
Watch Out For…
• Read all documentation before you embark
  – Charon-VAX, Windows Server 2003
• Difficult to work with raw LUNs from SAN to Windows for VMS cluster disks
• Disk unit numbers presented to cluster & disk allocation classes
Watch Out For…
• Boot options & saving them in the ROM file
• Windows Server 2003 Standard Edition limits: 4 processors & 4GB memory
  – Other “editions” offer higher limits
• Users concerned that things may be broken because they run so fast!
Questions & Comments?