LTFS & Storage: Today and Tomorrow
Chris Wood
Director of Product Management, Axiom LOB

Agenda
• LTFS: Going beyond backup
• Storage Musings: More than Speeds and Feeds
• Oracle Plans: Axiom and ZFSSA

LTFS Open Format Overview
• Use Cases
• LTFS vs. TAR
Oracle's LTFS Offerings
• Open Edition
• Library Edition

Open Formats
Why Store Data in an Open Format?
• How Much Confidence Do You Have in Your Archive Application?
• How Do You Get to Your Data if the Application Goes Away, Say After 100 Years?
• Have You Written Your Data in an Open Format?
• Hint: There is no such thing as an open backup format!

Oracle Statement of Direction
Oracle is committed to the continued development of archive solutions based on open formats.

LTFS: What Is It?
What makes storage self describing? A storage format is self describing when a file and the index that describes that file are stored together.
(Diagram: a FILE stored together with its INDEX is a self describing storage format; a FILE whose INDEX lives only in separate software is software dependent.)

What is unique about self describing files? Self describing files are independent of the OS and the software used to create them.

Why do we want files to be accessible? Two reasons:
1 - Easily share files
2 - Retrieve files in the future

Why do we want software independence? Freedom to access files without proprietary software.

Access Files on Tape Just Like Disk & Flash With Open Formats
SAME STEPS TO ACCESS FILES
To Access Files on Flash:
1. Insert Flash Drive
2. Download Driver
…Access Files
To Access Files on Tape (Written in an Open Format):
1. Insert Tape Into Drive
2. Download Driver
…Access Files

What are the Open Formats?
The Linear Tape File System (LTFS) and TAR are open format specifications for storing files and an index of those files together on tape, just like disk & flash.
TAR                          LTFS
• Open                       • Open
• Self Describing            • Self Describing
• 30+ Years of Development   • Introduced in 2010
• Standard                   • Standardization in Progress
• Multiple Index Access      • Single Index Access

How Does LTFS Store Files & the File Index Together?
• Tape must be LTFS formatted
• LTFS formatted tapes have 2 partitions
  • 1st partition for the file index
  • 2nd partition for the files
• LTO-5 was the 1st generation of LTO media to support partitions

LTO-5 Partitioning with LTFS Indexing
(Diagram: from beginning of tape to end of tape, the file index partition holds INDEX 1, INDEX 2, INDEX 3 and the file partition holds FILE 1, FILE 2, FILE 3.)

TAR Format on Tape
(Diagram: from beginning of tape to end of tape, the file index is interleaved with the data: INDEX 1, FILE(S) 1, INDEX 2, FILE(S) 2, INDEX 3, FILE(S) 3…)

LTFS with LTO
(Diagram: the LTFS format on tape, showing the same two-partition layout as above: a dedicated file index partition holding INDEX 1-3 and a file partition holding FILE 1-3.)

Oracle is Driving LTFS Standardization
Oracle Named Co-Chair of SNIA Committee Standardizing the LTFS Specification

Oracle's StorageTek LTFS, Open Edition
With LTFS, applications use standard interfaces to write to tape.
(Diagram: APPLICATIONS talk to a POSIX / CIFS / NFS interface; behind that interface sits either disk or flash, or tape via Oracle's StorageTek LTFS, Open Edition.)

Oracle's Open Format Software
Oracle's StorageTek LTFS, Open Edition Software
• First LTFS Driver to Support: StorageTek T10000C, HP LTO-5, & IBM LTO-5
• Oracle's Driver is Free
https://oss.oracle.com/projects/ltfs/

NDA: Coming 2013 @ OOW: Oracle's StorageTek LTFS, Library Edition

What are the limitations of LTFS, Open Edition?
• To see the file metadata, the tape must be mounted (see the sketch below)
• Only one tape can be mounted at a time
• You have 10,000 tapes: which one holds the file you want?
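The point of the Open Edition slides above is that once an LTFS-formatted tape is mounted, applications reach it through the same POSIX calls they already use for disk and flash. Below is a minimal Python sketch of that access model, assuming a tape has already been mounted by the LTFS driver at a hypothetical mount point /mnt/ltfs0; the mount point and the sample file name are illustrative assumptions, not taken from the slides.

```python
import os

# Hypothetical mount point where the LTFS driver has presented one tape as a
# filesystem. With LTFS Open Edition only one tape is visible here at a time.
LTFS_MOUNT = "/mnt/ltfs0"

# List the files on the mounted tape exactly as for disk or flash; the LTFS
# index partition supplies names and metadata without touching the file data.
for name in sorted(os.listdir(LTFS_MOUNT)):
    path = os.path.join(LTFS_MOUNT, name)
    info = os.stat(path)                      # ordinary POSIX metadata call
    print(f"{name}\t{info.st_size} bytes")

# Read a file back with a normal open()/read(); the driver handles tape
# positioning behind the standard interface. (Illustrative file name only.)
sample = os.path.join(LTFS_MOUNT, "project_footage.mov")
if os.path.exists(sample):
    with open(sample, "rb") as f:
        chunk = f.read(4096)                  # sequential reads suit tape best
    print(f"read {len(chunk)} bytes from {sample}")
```

Nothing in the sketch is tape-specific: point LTFS_MOUNT at a flash drive instead and the same code runs, which is exactly the "same steps to access files" argument. It also exposes the Open Edition limitation above: only the one mounted tape is ever visible.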
What Tape is My File On?

What is StorageTek LTFS, Library Edition?
LTFS, Library Edition provides a file system interface to a tape library through POSIX, CIFS or NFS commands, just like disk or flash. It also addresses the "What tape(s) are my files on?" problem.

How Does LTFS-LE Expose a Standard Interface to Tape Libraries?
• Applications get visibility into all the files in a tape archive through a single POSIX / CIFS / NFS interface.
• All file indices are stored within the LTFS-LE appliance.
• The file index and the files are also stored locally on every LTFS tape.
(Diagram: APPLICATIONS connect to the POSIX / CIFS / NFS interface of the LTFS-LE appliance, which holds copies of INDEX 1, INDEX 2, INDEX 3; each tape in the library still carries its own index and files.)
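Because LTFS-LE keeps a copy of every tape's index in the appliance, an application can answer "what tape is my file on?" by walking metadata alone, without mounting tapes one at a time. The following is a hypothetical sketch of that usage, assuming the library is exported at a single mount point with one directory per tape volume; the mount layout, volume naming, and search string are illustrative assumptions, not taken from the slides.

```python
import os

# Hypothetical export of the whole library; each subdirectory is one tape volume.
LIBRARY_MOUNT = "/mnt/ltfs_le"

def find_file(name_fragment):
    """Walk the aggregated index and report which tape(s) hold matching files.

    Only metadata is touched here, so no tape needs to be mounted or moved;
    a subsequent open() on a hit is what would trigger the actual recall.
    """
    hits = []
    for volume in sorted(os.listdir(LIBRARY_MOUNT)):       # one entry per tape
        vol_path = os.path.join(LIBRARY_MOUNT, volume)
        # Directory listings are served from the appliance's index copies.
        for dirpath, _dirs, files in os.walk(vol_path):
            for fname in files:
                if name_fragment in fname:
                    hits.append((volume, os.path.join(dirpath, fname)))
    return hits

for volume, path in find_file("2012_q3_report"):
    print(f"found on tape volume {volume}: {path}")
```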
LTFS Library Edition Summary
• Powerful new tool to enhance the usability of tape.
• But LTFS is just infrastructure. Archives require upper-level software to ingest, index, move, protect, access, etc.
• The widespread adoption of LTFS will be driven by the ISV community, not by tape vendors.

Oracle's Broomfield, Colorado Campus
• 225,000 square feet
• $70M+ labs
• Over 50 ISV partnerships

Agenda
• LTFS: Going beyond backup
• Storage Musings: More than Speeds and Feeds
• Oracle Plans: Axiom and ZFSSA

Is it all about Flash?
You would sure think so by listening to all the analysts and self-appointed pundits!
Sure, it's great stuff: it eliminates seek and rotational latency and helps close the gap between nanoseconds (servers) and milliseconds (disks). But it's pricey (8-10x HDDs) and it wears out.
• Flash is not going to catch HDD prices any time soon!

Is it all about IOPS, Bandwidth and Capacity?
The "My Dad can beat up your Dad" theory of sales. It used to be, but not anymore:
• 4 TB 3.5" SAS HDD = 4 million IBM 3330 disk packs
• 16 Gbit FC = 1066 IBM S/370 block multiplexor channels
• 1 million IOPS was impossible 10 years ago; now it's commonplace
Who really gives a damn anymore?

So what is it all about? People and Money
It's really about money:
• "Stuff" is cheap; people are not.
• "Healthcare" for stuff is a whole lot cheaper than healthcare for people.
• "Stuff" does not take vacations, cause HR problems, goof off, or make stupid mistakes.
The real value-add of future storage has to be about maximizing the effectiveness of the most expensive resource: people.

Wouldn't it be nice if storage could:
• Accept a single command from an application and provision itself
• Locate all paths to the application
• Constantly self-tune as workloads change
• Do whatever is necessary to protect data
• Never allow silent corruption
This is what Oracle is working on right now.
• The power of developing both hardware and software has never been greater than right now.

Interesting Comments
"Our goal is to eliminate, to the greatest extent possible, all people from the day-to-day operation of our data centers. We simply cannot afford the TCO of humans anymore."
- Storage CTO, eBay & PayPal
"We simply can't run a competitive service business which relies on significant human interaction to manage the equipment we host for others and/or the equipment we own that hosts our managed service offerings."
- SVP, Rackspace

The Future? 8.5% Unemployment: the New Normal
"Any storage sale under a petabyte should probably go through the channel."
- Executive at a very large computer company
"Silicon Valley adding jobs while the rest of California stagnates"
- 2012 California "State of the State" report

Agenda
• LTFS: Going beyond backup
• Storage Musings: More than Speeds and Feeds
• Oracle Plans: Axiom, ZFSSA & Tape

Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Oracle Enterprise Tape Technology Roadmap (2010-2015)
5 Year Trajectory: Tape Capacity 12-20x, Archive Capacity 14-24x
Delivered:
• T10000B Enterprise Tape Drive, 1 TB capacity, with the SL8500 Enterprise Tape Library at 100 PB capacity
• T10000C Enterprise Tape Drive, 5 TB capacity, with the SL8500 Enterprise Tape Library at 500 PB capacity
In the Lab:
• T10000D Enterprise Tape Drive, 8 TB capacity, with the SL8500 Enterprise Tape Library at 700-900 PB capacity
• Gen5 Enterprise Tape Drive, 12-20 TB capacity, with the SL8500 Enterprise Tape Library at 1.4-2.4 Exabyte capacity

Oracle ZFS Storage Roadmap (2012-2017)
Delivered (2012): 7x20 Series
• 2TB DRAM, 4TB Read Flash, 10TB Write Flash
• 2.5PB Storage Capacity, 2-node Systems
• 10GB/s, 130K IO/s
Planned hardware: Z3-3 / Z3-5 / Z3-7, Z4-7, and Z5-3 / Z5-5 / Z5-7
• Successive generations are characterized as 1x-2x Faster and 2x-3x More Scalable, with 3x More Read Flash and Full Redundancy called out for individual models
• Note: ZBA is a pre-racked and tested configuration of the Z*-7 system.
Storage OS:
• 2012, 7S Storage OS: Unified Storage Analytics, Hybrid Storage Pool (HSP), Data Services, Oracle Eng. Integration (HCC, EM, etc.)
• Planned, 2013-2017, in three phases:
  1. Engineered integration and global management: Global Name Space Scale, HSP Advancements, Database Dynamic-Tuning (OISP)
  2. Dynamic management and security: Global Load Balancing, Oracle Application Provisioning, Database QoS and Analytics (OISP), Secure Multi-tenancy
  3. Dynamic, multi-tenant system management: Elastic Scale, Global Analytics, Application Auto-Tuning

Oracle Axiom / F1 Flash Storage System Roadmap
Delivered: Axiom 600
• 2-8 CU, 1-64 SSD/HDD Shelves, 12-832 Drives, 2-128 RAID Controllers
• 192GB Cache, 70k SPC-1 IOPS
• R5.x Storage Services (2013): QoS, Storage Domains, MaxMan, Copy Services, Replication, Application Profiles, Oracle Integration (HCC, EM, etc.)
Planned (2013): F1-102 / F1-108, all-Oracle hardware base
• 2 CU, 1-30 SSD/HDD Shelves, 24-720 Drives, up to 12 host 10g/16g interfaces
• 80-416 GB RAM Cache, 650k SPC-1 IOPS, up to 1M IOPS
• 3PB Capacity, 17.4GB/s sequential read, 11.5GB/s sequential write
• F1 Storage Services OS: QoS-driven adaptive tiering and application storage tuning (Enhanced QoS for NAS + SAN; MaxRep 3.0 with new hardware; Oracle Business Application Profiles; VMware SRM, VASA, VAAI)
Planned (2014): F1-216, single model agile scale-out with InfiniBand
• 2-8 CUs, 2-96 SSD/HDD Shelves, 24-2304 Drives, up to 48 host interfaces
• 416-1664GB Cache, 2.4M SPC-1 IOPS, up to 4M IOPS, IB-enabled
• 9.2PB Capacity, 70GB/s sequential read, 45GB/s sequential write
• F1 Storage Services OS: Unified (SAN + NAS) in all systems; Multiprotocol FC, IP, iSCSI and IB on every system; MaxRep 3.1: One Button Site Failover
Planned (2015-2016): single model expanded scale-out with hardware refresh
• 2-16 Control Units, 2-192 SSD/HDD Shelves, 24-4608 Drives, up to 96 host interfaces
• 416-6656 GB Cache, 5.8M SPC-1 IOPS, up to 8M IOPS
• 28PB Capacity, 140GB/s sequential read, 90GB/s sequential write
• F1 Storage Services OS: Unified (SAN + NAS + IB), Multi-node Redundancy, Symmetric Cache, Auto Load Balance, TP Space Reclaim