Solid State Storage in Oracle Environments

Mark Henderson & Rick Stehno
CAUTION: We successfully completed all of these system and storage modifications in our lab while performing our benchmark tests. Before implementing any of these modifications in your environment, test them thoroughly to determine whether they are appropriate for your systems.
LSI Overview
Company Highlights
• Focused on Storage and Networking
• 12,000+ Patents and Patent Applications
• $2.5B Annual Revenue
• Global Presence, 3,000+ Employees
• 300,000+ Storage Systems Deployed
[Pie chart: Storage Semiconductors 44%, Storage Systems 34%, Networking 19%, IP 3%]
LSI-Oracle Partnership
• 12 years of Successful Partnership Spanning Silicon, Boards and Storage Systems
• Technology and Manufacturing Partner for the Oracle 2500 & 6000 Storage Systems
• Designed and Tested for Interoperability with Oracle Operating Systems and Applications
Who we are:
• Rick Stehno is an Oracle Technologist/DBA with LSI Corporation, which designs and manufactures high-performance storage systems. Rick works with Oracle and LSI's various OEMs to create and promote solutions using LSI's storage systems with the various Oracle technologies. Rick has been in the IT field for over 34 years and has been working with Oracle databases since 1989.
• Mark Henderson is a Technical Marketing Manager with LSI Corporation, which designs and manufactures high-performance midrange storage systems for major OEMs. Mark works with Oracle and LSI's various channels to create and promote solutions that address customer problems and create competitive advantage. He has a degree in Computer Systems Engineering, has designed high-end flight simulators, participated in computer science and networking research at US DOE labs, architected HPC centers, and has been involved with storage resellers, Fibre Channel Director SAN technology and MAID storage systems.
Solid State Storage Comes in Many Forms
• Is delivered to the market in three basic forms
• Server Cards – think memory expansion
• Network device – the best known is the Oracle 5100
• Solid State Disk – installed in either servers or RAID systems
  – SSD installed in servers has many of the same properties as server cards.
Solid State Technology (And do you really care?)
• Internally they are similar to a bunch of your average jump drives
• Solid State Storage is a consumable resource – but don’t panic!
– It wears out, not unlike regular old rotating disk drives
– Bad blocks on drives, remapped sectors
• There are two technologies that you may hear about
  – MLC (Multi-Level Cell)
  – SLC (Single-Level Cell)
• The technology is simply a discussion of *cost*, not price.
OK, so now that I have this super-fast device – what does that mean? The obvious, well, isn't…
• It's all equally accessible – no short stroking
• While it doesn't rotate, mixed reads and writes do slow it down
• Scanning the device for bad sectors is a thing of the past
• It may not be necessary to stripe for performance
• In cache use cases you might not even need to mirror SSDs
• Using Smart Flash Cache AND moving data objects to SSD decreased performance
• Online Redo Logs are best handled by HDD because of the sequential writes
Look at the Solid State price per GB!!!
($/IOP vs. $/GB)
[Chart: $/IOPS vs. $/GB]
Oracle Smart Flash Cache & Database Storage Tiering
Smart Flash Cache
• Technology available in 11gR2 + a patch
• Extends Oracle Buffer Cache
• Can use any technology
– Flash Cards, Network Flash, Solid State Disk, even USB drives
• Point Oracle at the flash resource and it’s all automatic
• Least interaction between storage admin and DBA
Database Storage Tiering
• Tiered storage often uses Flash as a “Tier 0” layer
• Can be higher performance AND less expensive
• Mix and Match Multi-technology solutions
• Storage Arrays can hold multiple Tiers
– Some do so with more grace than others
• Use Database Partitioning to drive Storage Tiering
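As a minimal sketch of partition-driven tiering (the table, column, and tablespace names below are hypothetical; ts_ssd is assumed to sit on a flash-backed disk group and ts_hdd on rotating disk), recent partitions can live on the SSD tier while older partitions are demoted to HDD:

-- Hypothetical range-partitioned table spread across two storage tiers
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')) TABLESPACE ts_hdd,
  PARTITION p_2011 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD')) TABLESPACE ts_ssd
);

-- When the hot partition ages out, demote it to the HDD tier
-- (any local index partitions would then need a rebuild)
ALTER TABLE orders MOVE PARTITION p_2011 TABLESPACE ts_hdd;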
Where should you invest in Solid State?
• Where should you invest?
– Server
– Network
– Storage
Investing in Solid State in the Server
• Lowest latency
• Low entry point
• Dedicated to a specific server
• Not transportable
• Great for buffer extension / acceleration
• Variety of manufacturers / sizes / capabilities
Investing in Solid State in the Network
• Most dense flash storage
• Looks like a drive
– Partitionable x4
– Not sharable
Investing in Solid State Storage
• Persistent Storage
• Multi-Server Shared Storage
– OVM
– RAC
– VMware
• Data Protection Methods
• Database Partitioning
• Automatic Storage Tiering
So where should you invest in solid state?
“Depends…”
Property \ Location | Server Flash | Network Flash | Storage Flash
Physical Description | Card | Device | Solid State Disk
Latency | Lowest | Low | Low
Entry Price Point | Lowest ($5k) | Medium ($50k) | Low ($15-30k)
Sharable Data | No | No | Yes
Partition Sharing | No | Yes | Yes
Single Machine (SMP) Fit | Yes | Yes | Yes
Shared Multi-Machine Fit – Real Application Clusters (RAC) | No | No | Yes
Persistent Data (no power) | No | No | Yes
Data Protection (Snapshot, Volume Copy) | No | No | Yes
Site Protection (Remote Volume Mirroring) | No | No | Yes
Automatic Storage Management (ASM) Compatible | Yes | Yes | Yes
Recovery Manager (RMAN) Compatible | Yes | Yes | Yes
Oracle Smart Flash Cache | Yes | Yes | Yes
VMware Shared Storage (HA and Advanced Features) | No | No | Yes
Moving the Bottleneck
• High Performance array controller
– Sustained throughput to the drives
– Not just Cache numbers
• And the rest of the system has to be able to use the faster speed…
[Diagram: Server(s) → Network (FC) → Controller → Drives]
Product Background External
Oracle Storage Array SSD Testing Results
[Charts: average response time (sec) and average transaction time (sec) for All HDD Baseline, Flash Cache on SSD, and Move Top 9 Objects to SSD; percent response-time and transaction-time gains relative to the All HDD Baseline]
SAN Based SSD Testing
• We used an LSI 7900 Storage Array
  – Three Storage Drive Enclosures
  – (28) 15k RPM Fibre Channel drives in RAID 10 for ASM disk groups
  – (3) 15k RPM Fibre Channel drives in RAID 10 for Redo logs
  – (2) 73GB SSD in mirrored RAID for data protection
• Server: Two Xeon 5150 @ 2.66GHz dual-core
• Oracle Enterprise Linux Release 5.5
Database Configuration
• SGA=1.5GB
• filesystemio_options=asynch
• disk_asynch_io=TRUE
• 1GB redo logs
• ASM
• 60GB Oracle Smart Flash Cache
  – SQL> alter system set db_flash_cache_file='/u04/flash.dbf' scope=spfile;
  – SQL> alter system set db_flash_cache_size=60g scope=spfile;
  – SQL> show parameter flash

NAME                 TYPE         VALUE
-------------------- ------------ --------------
db_flash_cache_file  string       /u04/flash.dbf
db_flash_cache_size  big integer  60G
WarpDrive™ PCIe Solid State Acceleration Card
• Provides scalable SSD performance
inside-the-server
• Designed to supercharge application
performance
– Built for IOPS, throughput and both
random and sequential I/O workloads
– Performance: 240K IOPS, 1.5 GB/s, 50 µsec latency
– Usable capacity: 300GB (with 28% over-provisioning)
• No change to OS or applications
• Built for broad OS support
– Bootable
– Including RHEL, SLES, Windows
32/64 support
WarpDrive Testing Configuration
• HP ProLiant DL370 G6
– Dual Intel Xeon Processor X5570
– 48GB DDR3-1333
– LSI 9210-8i SAS host bus adapter
– LSI SAS 2x36 Expander
– 146GB 2.5-in. SFF 6G SAS 10K RPM drives
• Software RAID 0 over 6 LUNs for the UNDO tablespace
• Software RAID 0 over 6 LUNs for the Online REDO Logs
• All tablespaces were striped over 10 individual LUNs when using HDD
Database Configuration – Single WarpDrive
• SGA=16GB
• filesystemio_options=asynch
• disk_asynch_io=TRUE
• 4GB redo logs
• Benchmarks used Swingbench with a 100-user load with no latency
• 250GB Oracle Smart Flash Cache
  – SQL> alter system set db_flash_cache_file='/u05/flash.dbf' scope=spfile;
  – SQL> alter system set db_flash_cache_size=250g scope=spfile;
  – SQL> show parameter flash

NAME                 TYPE         VALUE
-------------------- ------------ --------------
db_flash_cache_file  string       /u05/flash.dbf
db_flash_cache_size  big integer  250G
Dual WarpDrives with Oracle ASM
(Database Configuration)
• SQL> alter system set db_flash_cache_file='+DATAWH/flash.dbf'
scope=spfile;
• SQL> alter system set db_flash_cache_size=250g scope=spfile;
• SQL> show parameter flash
NAME                 TYPE         VALUE
-------------------- ------------ -----------------
db_flash_cache_file  string       +DATAWH/flash.dbf
db_flash_cache_size  big integer  250G
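For reference, a minimal sketch of how the two WarpDrive cards could be mirrored by ASM itself with a normal-redundancy disk group (the DATAWH name follows the slide; the disk paths and failure-group names are assumptions):

-- Run in the ASM instance; each card goes in its own failure group
-- so ASM mirrors extents across the two devices
CREATE DISKGROUP DATAWH NORMAL REDUNDANCY
  FAILGROUP warp1 DISK '/dev/oracleasm/disks/WARP1'
  FAILGROUP warp2 DISK '/dev/oracleasm/disks/WARP2';

Normal redundancy keeps a second copy of every extent on the other card, which is one way to arrive at the mirrored-WarpDrive configuration in the results that follow.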
Oracle WarpDrive Testing Results
[Charts: transactions per second (TPS), transactions per minute (TPM), and response time (ms) for three configurations – Baseline, Smart Flash Cache, and Mirrored WarpDrives]
Tools or Procedures to Investigate I/O Activity
Tools available in the database:
• Statspack (Free, since 8i)
• Automatic Workload Repository (AWR) – requires a license
• Oracle Enterprise Manager (OEM)
The database views in specific areas:
• v$filestat
• v$sysstat
• v$system_event
• v$session_wait
• Turn on trace events
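As an illustration, a rough sketch of pulling the I/O picture straight from these views (the views and columns are standard; the ROWNUM cutoff is arbitrary):

-- Top user-I/O wait events system-wide
SELECT *
  FROM (SELECT event, total_waits, time_waited_micro / 1e6 AS seconds_waited
          FROM v$system_event
         WHERE wait_class = 'User I/O'
         ORDER BY time_waited_micro DESC)
 WHERE ROWNUM <= 10;

-- Physical read/write counts and times per datafile
SELECT d.name, f.phyrds, f.phywrts, f.readtim, f.writetim
  FROM v$filestat f
  JOIN v$datafile d ON d.file# = f.file#
 ORDER BY f.phyrds + f.phywrts DESC;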
Operating System level tools:
• For Linux/Unix
– iostat
– vmstat
• For Windows
– Performance Monitor using the Oracle
options
Review Statspack or AWR Reports
• Instance CPU Section
– Is the system CPU bound?
• Tablespace I/O Statistics Section
– Which tablespace(s) have the highest I/O activity?
• Segments by Physical Reads
  – Most active objects by physical reads
  – Percentage of total read I/O activity
Additional AWR Analysis
• Segments by Physical Writes
  – List of the most active database objects based on physical writes and the percentage of total write I/O activity.
AWR was used to identify the top nine data objects to move
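A rough sketch of pulling those top objects from AWR directly (the snapshot range is hypothetical, and querying AWR requires the Diagnostics Pack license noted earlier):

-- Top 9 segments by physical reads between two AWR snapshots,
-- similar to the "Segments by Physical Reads" report section
SELECT *
  FROM (SELECT o.owner, o.object_name, o.object_type,
               SUM(s.physical_reads_delta)  AS phys_reads,
               SUM(s.physical_writes_delta) AS phys_writes
          FROM dba_hist_seg_stat s
          JOIN dba_hist_seg_stat_obj o
            ON  o.dbid     = s.dbid
            AND o.ts#      = s.ts#
            AND o.obj#     = s.obj#
            AND o.dataobj# = s.dataobj#
         WHERE s.snap_id BETWEEN 100 AND 110   -- hypothetical snapshot range
         GROUP BY o.owner, o.object_name, o.object_type
         ORDER BY SUM(s.physical_reads_delta) DESC)
 WHERE ROWNUM <= 9;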
LSI Oracle Enterprise Manager Plug-in
• Our Plug-in is intended to assist Database Administrators:
– Understand the storage configuration
– Comprehend performance trends
– View the current storage status
– Plan proactively for capacity needs
OEM Plug-in Displays Storage Resources
OEM Plug-in Shows Database Relationship to LUNs
OEM Plug-in Performance Graphs
OEM Plug-in Storage Array Performance
Linux Tuning for Solid State Drives
(both SAN based SSD and WarpDrive)
• Align the SSD on a 4-KB boundary for optimal performance
• Use ext2 to bypass filesystem journaling
  – Eliminates double writes to the SSD, which increases performance
  – Prolongs the life of the SSD
• Change the kernel I/O scheduler to NOOP for the SSD device
• Use the noatime filesystem mount option
  – Eliminates system writes to the filesystem when objects are only being read
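A minimal sketch of these settings on the command line, assuming the SSD shows up as /dev/sdb and is mounted at /u04 as in the earlier flash-cache example (device name, mount point, and partition layout are assumptions):

# Partition starting at 1 MiB, which also satisfies 4-KB alignment
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%

# ext2 has no journal, so the double write to a journal is avoided
mkfs.ext2 /dev/sdb1

# Switch this device's I/O scheduler to noop (not persistent across reboots)
echo noop > /sys/block/sdb/queue/scheduler

# Mount with noatime so reads do not trigger access-time writes
mkdir -p /u04
mount -o noatime /dev/sdb1 /u04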
Linux Tuning Test Observations
• Test results using a 500-user load with only the operating system tuning changes applied:
[Charts: average TPM, TPS, and average response time (ms) before and after the operating system tuning changes]
– Overall: 35% performance increase
• These changes not only increased performance with the 100-user load, they also improved performance at the higher user loads.
• System performance did not drop dramatically with the 500-user load
Solid State Conclusions and Recommendations
• If you are I/O bound AND you have CPU cycles:
  – Take your storage admin out for coffee…
  – If you aren't using ASM, consider it
  – Smart Flash Cache will get you an improvement, IFF you have CPU cycles
  – Best results come from using AWR / Statspack, but it takes some work
  – Move data objects or use Smart Flash Cache, not both
• Solid state in the server, network, or storage will work, depending on your goals
  – Shared storage requires a storage system
• A modest SSD investment can provide huge returns
• The LSI 7900 Engenio Storage System and the LSI WarpDrive can deliver SSD performance to applications such as Oracle with a balance of performance and cost efficiency.
Resources and Contact Information
Material taken from the following white papers:
• Migration of Live Oracle Databases to LSI Storage
• Oracle Storage Tiering within a LSI Engenio 7900
• Where to Invest in Flash in an Oracle Environment
• Practical Application of Solid State Disk (SSD) to an Oracle
Database on LSI Engenio Storage
• Best Practices for Optimizing Oracle® Database Performance
with the LSI™ WarpDrive™ Acceleration Card
Rick.Stehno@lsi.com
Mark.Henderson@lsi.com