The Oracle Storage Story, Now and Tomorrow
Chris Wood, Director of Product Management, Axiom LOB

A Simplified History of Sun/Oracle Storage
• In 1999 Sun acquired MaxStrat (and me in the process) to get its first real RAID array: the famous (infamous) T3, a.k.a. Project Purple.
• Sun was not skilled at acquisitions; it managed to "incent" 90% of the MaxStrat development team to quit within the first 90 days.
• That left a half-completed program with no one who had the skills to finish it.
• About two years later the T3 finally shipped with about half the features it was supposed to have and little or no RAS.
• We sold over 36,000 of them, and then it started to break…
• Everything hit the fan – I got good at ducking.

If at first you don't succeed…
• Buy another company with magic technology: Pirus Networks (2002).
• Pirus had a distributed-architecture crossbar virtualization engine that was supposed to do NAS and SAN, offer any-to-any connectivity and rich data services, be fast as all get out, and become the industry's first totally virtualized array (the 6920).
• Unfortunately, this did not happen, and the remnants were sold off to LSI/Engenio for pennies on the dollar.
• Sun also bought LSC in 2001 and obtained SAM-QFS, but had no clue what to do with it. (The "box" mentality.)
• Now Sun had no competitive storage offerings, and some cool storage software with no idea how to exploit it. So what to do?

Give up, and OEM something that works
• Sun OEMs "Big Fish" from HDS – the 99xx line.
• Sun OEMs "Little Fish" from LSI – the 6xxx line.
• Both products work just fine, so Sun decides to develop its own UI, CAM (Common Array Manager), rather than use the well-respected HDS HiCommand and LSI SANtricity.
• In keeping with past failures, CAM was universally disliked by customers – slow, buggy, and short on function; otherwise a great product.
• There was one other problem: it's very hard to make money on OEM'd products, especially if you have daily channel conflict with your major vendor, HDS.
• Now what?

Fishworks
• Sun finally asks the right question: what exactly is storage?
• Disks and enclosures: commodity, low-value "stuff."
• Controller hardware: memory (DRAM), microprocessors, I/O ports, and a ton of software – a server, for instance – we make those!
• What glue is missing?
• File system, volume manager, RAID code, etc. – ZFS – gosh, we own that also…
• Management UI – Fishworks and DTrace Analytics.
• Decision: let's build the glue and tie together the other parts we already have – and thus was the ZFSSA born.

Today
• Oracle has two great storage offerings, Axiom 600 and ZFSSA – similar but different; more on this later.
• Oracle has the premier tape offering in the world, hands down.
• Oracle has rediscovered SAM-QFS – there's still nothing quite like it in the world.
• Oracle has doubled down on storage and will announce complete product refreshes for both the Axiom and the ZFSSA at Oracle OpenWorld, with availability this year. Sneak peek coming.

Agenda
• TAPE
• ZFSSA
• AXIOM
• SAM-QFS

Jim Cates on Tape… (his response to my comment that "the SL8500 is really cool")
• The T10000C tape drive is another fairly sophisticated product. The tape is 5 microns thick, and we do fast search down tape at about 12 m/s (26 mph). The drive writes ~3 micron data tracks on tape at roughly 11 mph to a placement accuracy of tenths of microns. We place 3,584 tracks on 1/2-inch tape using 32 data channels operating in parallel.
• Oracle designs everything for this drive, including the recording-head film stack, servo pattern, tape path, read/write channels, compression/encryption algorithms, analog/digital chips, etc.
• We arguably control the largest breadth of dimensions associated with a product. The recording heads have films measured in angstroms; the media length is about 1 kilometer. This is a difference of about 13 orders of magnitude.
• The rise time of the head and channel is specified in nanoseconds; the media shelf life is 30 years. This difference spans 17 orders of magnitude.
• CW comment: Tape is way cool!
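Those figures hang together; here is a quick back-of-the-envelope check (a minimal sketch using only the numbers quoted above plus standard unit conversions, nothing taken from Oracle documentation) of the search speed, the track pitch, and the angstrom-to-kilometer span.

```python
import math

# Back-of-the-envelope checks of the T10000C figures quoted above.
# Only the numbers from the slide are used; everything else is unit conversion.

METERS_PER_MILE = 1609.344

# Fast-search speed: 12 m/s expressed in mph (slide says ~26 mph).
search_mph = 12 * 3600 / METERS_PER_MILE
print(f"12 m/s = {search_mph:.1f} mph")                 # ~26.8 mph

# Track pitch: 3,584 tracks across 1/2-inch tape (slide says ~3 micron tracks).
half_inch_um = 0.5 * 25.4 * 1000                         # half an inch, in microns
print(f"track pitch = {half_inch_um / 3584:.2f} um")     # ~3.5 um per track

# Span from recording-head film thickness (angstroms, 1e-10 m)
# to media length (~1 km, 1e3 m): about 13 orders of magnitude.
print(f"angstrom-to-kilometer span = {math.log10(1e3 / 1e-10):.0f} orders of magnitude")
```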
But Tape is Dead…
• Every analyst says tape backup is declining – and they are correct!
• D2D (VTLs, etc.) is winning: faster, deduplication, sex appeal, whatever.
• Then along came government regulations, politicians, and lawyers. Thank God!
  • Data retention policies
  • Searchable archives
  • Project "Carnivore"
  • Big Data
  • Mobile devices
• All that stuff, which nobody will (or should¹) read, has to go somewhere.
¹ Privacy?

Why Tape: Economics of Tiered Storage
• Tape is the foundation: most of the data stored at the lowest cost.
• Current (7–30 days), ~5% of data, on flash/performance disk: frequent changes, immediate access, instantaneous protection.
• Recent (30–90 days), ~15% of data, on SAS/SATA: infrequent changes (a modification changes the classification back to "Current"), slight access delay acceptable.
• Archival (>90 days), ~80% of data, on tape: very infrequent or no changes; offsite/offline/nearline protected.
• Result: tape is growing; backup is not.

Analysts: Digital Archive Market
• Tape is the largest tier.
• [Chart: archive capacity and revenue, 2009–2017, in EB and $M.]
• Storage for archive and retention is a $3B market growing to over $7B in 2017.
• The use case for archive storage is becoming distinct from the primary or backup use case.
• Tape is established as the primary storage tier for long-term archive retention.
• Source: IDC Market Analysis, Worldwide Archival Storage Solutions Forecast: Archiving Needs Thrive in an Information-Thirsty World (IDC #230762).

Press: Tape
• "Archive, legislation, need for off-site data storage, disaster recovery, dealing with massive data quantities all mean there is a place for tape. Even as semi-primary storage, tape can have a role to play." – Clive Longbottom in Dave Bailey's column, Computing
• "(Tape) gives true offsite vaulting for disaster recovery, and requires much less power and cooling. We believe tape will remain viable for the foreseeable future." – Robert Amatruda, IDC, in Iain Thomson's column, V3

Oracle StorageTek Tape Portfolio – We Make a Lot of Stuff!
• Best scalability, best reliability and availability, best TCO and investment protection.
• Libraries from entry to enterprise: SL150, SL3000, SL8500.
• Drives: LTO, T9840, T10000; plus VLE and VSM.
• Software: device management, data management, tiered storage, virtualization, encryption.
• Oracle's Broomfield, Colorado campus: 225,000 square feet, $70M+ in labs, over 50 ISV partnerships.

Oracle Tape Portfolio Investment (last 2 years)
• More innovation than ever… with more to come!
• Increased investment in tape R&D.
• Our strongest portfolio ever.
• Innovation from the integration of Oracle software with StorageTek hardware.

Making Tape Reliable for the Long Term: Proactive Monitoring – Oracle StorageTek Tape Analytics Software
• Simplify tape management: Tape Analytics monitors all your drives and media so you can focus your resources elsewhere.
• Leverage intelligent analytics: Oracle's proprietary algorithms provide proactive health indicators that can be trusted.
• Worry-free deployment: Tape Analytics gathers performance data through the library without ever entering your live data path.
• Grow with peace of mind: a monitoring application that scales to meet your needs, Tape Analytics supports monitoring multiple, globally dispersed libraries from a single interface.

LTFS: What Is It?
• What makes storage self-describing? A self-describing storage format is one in which a file and the index that describes that file are stored together.
• Why do we want files to be accessible? Two reasons: 1) to easily share files, and 2) to retrieve files in the future.
• Self-describing files are independent of the OS and of the software used to create them. Why do we want software independence? Freedom to access files without proprietary software – a software-dependent format keeps the index inside the application rather than with the file.
• [Diagram: TAR format on tape vs. LTFS format on tape – with TAR, file data and indices are interleaved down the length of the tape from beginning to end; with LTFS, the tape is split into an index partition and a file partition, so the index can be read without streaming the data.]

Oracle's Open Format Software – Oracle's StorageTek LTFS, Open Edition
• The first LTFS driver to support the StorageTek T10000C as well as HP and IBM LTO-5 drives.
• Oracle's driver is free: https://oss.oracle.com/projects/ltfs/

So What's Missing? How does LTFS-LE expose a standard interface to tape libraries?
• LTFS Library Edition (LTFS-LE) presents applications with a POSIX / CIFS / NFS interface and visibility into all the files in a tape archive.
• All file indices are stored within the LTFS-LE application, while the file index and files also remain stored locally on every LTFS tape (see the sketch below).
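To make the self-describing idea concrete, here is a toy sketch of the concept only – this is not the actual LTFS on-tape XML index schema or partition layout, and every name and field below is invented for illustration: a "file partition" holds the file bytes, an "index partition" holds a manifest, and listing or extracting files needs nothing but what is stored alongside the data.

```python
import json
import os

# Toy illustration of a self-describing layout in the spirit of LTFS:
# file data lives in one area ("file partition"), a small machine-readable
# index lives in another ("index partition"). NOT the real LTFS format;
# file names, fields, and layout here are invented.

def write_volume(volume_dir, files):
    os.makedirs(volume_dir, exist_ok=True)
    index, offset = [], 0
    with open(os.path.join(volume_dir, "file_partition.bin"), "wb") as data:
        for name, payload in files.items():
            data.write(payload)
            index.append({"name": name, "offset": offset, "length": len(payload)})
            offset += len(payload)
    # The index travels with the data, so no external database or backup
    # application is needed to know what is on the volume.
    with open(os.path.join(volume_dir, "index_partition.json"), "w") as idx:
        json.dump(index, idx, indent=2)

def list_volume(volume_dir):
    # Listing reads only the small index partition -- analogous to mounting an
    # LTFS tape and seeing a directory tree without streaming the whole tape.
    with open(os.path.join(volume_dir, "index_partition.json")) as idx:
        return [entry["name"] for entry in json.load(idx)]

def read_file(volume_dir, name):
    with open(os.path.join(volume_dir, "index_partition.json")) as idx:
        entry = next(e for e in json.load(idx) if e["name"] == name)
    with open(os.path.join(volume_dir, "file_partition.bin"), "rb") as data:
        data.seek(entry["offset"])
        return data.read(entry["length"])

if __name__ == "__main__":
    write_volume("demo_volume", {"a.txt": b"hello", "b.txt": b"tape"})
    print(list_volume("demo_volume"))         # ['a.txt', 'b.txt']
    print(read_file("demo_volume", "b.txt"))  # b'tape'
```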
Agenda
• TAPE
• ZFSSA
• AXIOM
• SAM-QFS

ZFS Storage Appliances – Fastest-Growing Major NAS and Unified Storage Vendor
• ZFS intelligent operating system: single storage software, three unique storage systems.
• 7120 (single controller): 48GB DRAM, 4 CPU cores, 2 PCI slots, 73GB of flash; scales to ~200TB.
• 7320 (single or dual controllers): 288GB DRAM, 16 processor cores, 2 PCI slots; scales to ~6TB of flash and ~500TB.
• 7420 (active-active controllers): 2TB DRAM, 80 processor cores, 10 PCI slots; scales to ~15TB of flash and ~3PB.

Engineered for Extreme Performance: Dynamic Storage Tiering (HSP – Hybrid Storage Pool)
• The most horsepower possible: 2TB DRAM, 80 cores of processing power, 4TB of read flash, 10TB of write flash (up to 4 write SSDs per tray), 10/15K-RPM SAS-2 and 7.2K-RPM SAS-2 disk.
• Automated, real-time data migration from DRAM to multi-class flash to multi-class disk storage (sketched below).
• The only software engineered for multi-level flash and disk storage.
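A rough sketch of the tiering idea behind a hybrid storage pool follows – a conceptual toy under stated assumptions, not ZFS's actual ARC/L2ARC code; the tier names, capacities, and eviction rule are all invented. Reads are served from DRAM when possible, then read flash, then disk, and recently touched blocks are promoted toward the faster tiers.

```python
from collections import OrderedDict

# Toy model of a DRAM -> read-flash -> disk read path, in the spirit of the
# hybrid storage pool described above. Capacities are arbitrary block counts,
# not appliance sizes; this is not the ZFS implementation.

class Tier:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.blocks = OrderedDict()                 # LRU order: oldest first

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)       # refresh recency on a hit
            return True
        return False

    def put(self, block_id):
        self.blocks[block_id] = True
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:        # evict the coldest block
            return self.blocks.popitem(last=False)[0]
        return None

class HybridPool:
    def __init__(self):
        self.dram = Tier("DRAM cache", capacity=4)
        self.flash = Tier("read flash", capacity=16)

    def read(self, block_id):
        if self.dram.get(block_id):
            return "DRAM"
        hit_flash = self.flash.get(block_id)
        # Promote the block into DRAM; anything evicted from DRAM spills to flash.
        spilled = self.dram.put(block_id)
        if spilled is not None:
            self.flash.put(spilled)
        return "flash" if hit_flash else "disk"

if __name__ == "__main__":
    pool = HybridPool()
    for blk in [1, 2, 3, 1, 4, 5, 6, 2, 1]:
        print(f"block {blk}: served from {pool.read(blk)}")
```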
ZFS Storage Performance Benchmarks
• Leading disk storage performance in all three industry benchmarks: SPC-1 (OLTP), SPC-2 (DSS/OLAP), and SPECsfs.
• [Charts: SPC-1 IOPS and response times, SPC-2 throughput (GB/s), and SPECsfs results comparing Oracle 7320/7420 and 6780 systems against NetApp (3270) and IBM (V7000, XIV), including new December 2013 Oracle 7420 results.]
• Sources: SAN – storageperformance.org; NAS – spec.org/sfs2008/. (Caution: not for external use.)

Power of Storage Analytics: Real-Time Visibility
• Based on Solaris DTrace.
• Enables storage admins to see various statistics and measurements in real time.
• Provides drill-down analysis.
• Visibility into who and what is using storage resources.
• The most powerful tool for troubleshooting and resolving bottlenecks.

Agenda
• TAPE
• ZFSSA
• AXIOM
• SAM-QFS

Introducing the Pillar Axiom 600: The Core Value Propositions
• Oracle's Axiom lowers IT costs and speeds up ROI with advanced Quality of Service, simple data management, and industry-leading utilization and scalability.
• Easy-to-use enterprise storage: data and storage services can be promoted or demoted "on the fly" to increase or decrease performance and priority as business and application priorities change.
• Scalable and elastic: the ideal storage platform for virtual-infrastructure projects, IT data center consolidation projects, Oracle deployments, and bringing business-critical applications such as financials and OLTP online with the highest levels of performance, without trade-offs in capacity utilization.
• Industry-leading efficiency: utilization, performance, and protection, each tied to unique service levels. Consolidate your applications on a single storage platform.

Pillar Axiom 600 Storage System
• Axiom 600 – one model that linearly scales both capacity and performance.
• Smallest configuration – a single active-active Slammer with one Brick: 2 control units, 13 drives, 12TB capacity, 48GB cache.
• Largest configuration – up to 4 active-active Slammers with 64 Bricks: 8 control units, up to 832 drives, up to 1.6PB capacity, 192GB cache, 128 RAID controllers.
• All models include: SATA, FC, or SSD storage classes; patented Quality of Service (QoS) software; all protocols (FC, iSCSI, CIFS, NFS); all management software (multi-Axiom management, application profiles, thin provisioning, Storage Domains, path management); data protection and mobility software (replication, volume copy, clones); and engineered integration with Oracle software (HCC, OEM, ASM, SAM, OVM).

Top Technology Differentiators
• Quality of Service
  • Function: application prioritization and contention management that lets multiple applications efficiently co-exist on the same storage system.
  • Benefits: applications are assigned I/O resources according to their business value rather than being relegated to "first come, first served"; increases the overall efficiency and utilization of the storage system.
• Modular Architecture
  • Function: the ability to dynamically scale both performance and capacity by independently adding Slammers (up to 4) and Bricks (up to 64).
  • Benefits: the ability to grow and rebalance the storage pool as the business environment changes; predictable performance scaling as capacity is added.
• Distributed RAID
  • Function: achieves superior scalability and performance, even during drive rebuilds, by moving RAID local to the storage enclosures.
  • Benefits: maximum performance and utilization regardless of configuration size; higher reliability, by localizing the drive-rebuild process to the storage enclosure and shrinking the RAID rebuild window.

Axiom Architecture: Prioritizing the I/O Queue – Breaking the FIFO Model
• Align the business value of the application to I/O performance levels.
• [Diagram: in a typical multi-tier array, I/O from Premium, Medium, and Low priority virtual machines and virtual servers all lands in a single FIFO queue; the Axiom instead dispatches it through separate Premium, Medium, and Low priority queues.] A toy sketch of the queuing model follows below.
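Here is a minimal sketch of the priority-queue idea in the diagram above – a conceptual toy, not the Axiom's actual QoS scheduler; the Premium/Medium/Low names follow the slide, while the data structures and ordering rules are invented. Requests carry a business-priority tag, and the dispatcher drains higher-priority classes first instead of serving strictly in arrival order.

```python
import heapq
import itertools

# Toy dispatcher contrasting FIFO with business-priority queuing, in the
# spirit of the Premium/Medium/Low model shown above. Not Pillar's
# implementation; priorities and tie-breaking are illustrative only.

PRIORITY = {"premium": 0, "medium": 1, "low": 2}    # lower number = served first

class QoSQueue:
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()            # tie-breaker keeps FIFO order
                                                     # *within* a priority class
    def submit(self, vm, priority):
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._arrival), vm))

    def dispatch(self):
        while self._heap:
            _, _, vm = heapq.heappop(self._heap)
            yield vm

if __name__ == "__main__":
    q = QoSQueue()
    arrivals = [("VM3", "low"), ("VM1", "premium"), ("VM2", "medium"),
                ("VM1", "premium"), ("VM3", "low")]
    for vm, prio in arrivals:             # a FIFO queue would serve in this order
        q.submit(vm, prio)
    print(list(q.dispatch()))             # ['VM1', 'VM1', 'VM2', 'VM3', 'VM3']
```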
Extending QoS with Storage Domains
• Create up to 64 physical domains in a single Axiom.
• Refresh legacy or aging Bricks without disruption: easily and non-disruptively evacuate old drives.
• Isolate application data or workloads to a physical location: secure a workload to a domain, with no commingling of data across media.
• Separate user groups or departments to a physical location: secure department or user type/role data to a domain.
• Separate protocols: isolate SAN and NAS workloads – they can be very different.

Modular Scaling with Modular Components
• [Diagram: the modular building blocks – Pilot, Slammers, and Bricks.]

SAM-FS with Axiom: Complete Solution with Oracle WebCenter Content
• Scales to thousands of SAN clients, hundreds of file systems, billions of files, petabytes of disk cache, and exabytes of archive.
• [Diagram: WebCenter Content servers and a metadata server front a shared QFS file system; SAM policies drive multiple archive copies – the primary file system lives on SSD and FC performance disk, a SAM disk archive copy (copy 3) sits on SATA capacity disk, and SAM tape archive copies go to tape systems, with offsite SAM copies 1 & 2 on remote systems.]
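As a rough illustration of the policy-driven, multi-copy archiving pictured above – a conceptual toy only, not SAM-QFS's actual archiver configuration syntax or behavior; the tiers, age thresholds, and copy counts are invented – a policy can be thought of as a mapping from a file's age to the set of archive copies it should have:

```python
from dataclasses import dataclass

# Toy sketch of an age-driven, multi-copy archive policy in the spirit of the
# SAM-FS diagram above. NOT SAM-QFS's archiver.cmd format; thresholds, media
# names, and copy numbers are made up for illustration.

@dataclass
class ArchiveCopy:
    number: int
    media: str        # e.g. "disk archive (SATA)", "tape", "offsite tape"

POLICY = [
    # (minimum age in days, copies that should exist once the file is that old)
    (0,  [ArchiveCopy(1, "disk archive (SATA)")]),
    (7,  [ArchiveCopy(1, "disk archive (SATA)"),
          ArchiveCopy(2, "tape")]),
    (90, [ArchiveCopy(1, "disk archive (SATA)"),
          ArchiveCopy(2, "tape"),
          ArchiveCopy(3, "offsite tape")]),
]

def required_copies(age_days):
    """Return the archive copies a file of this age should have."""
    copies = []
    for min_age, copy_set in POLICY:      # thresholds are in ascending order
        if age_days >= min_age:
            copies = copy_set
    return copies

if __name__ == "__main__":
    for age in (1, 30, 365):
        media = [c.media for c in required_copies(age)]
        print(f"{age:>3} days old -> copies on: {media}")
```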