
Software-Defined Storage, Big Data, and Ceph:
What is all the fuss about?
Kamesh Pemmaraju, Sr. Product Mgr, Dell
Neil Levine, Dir. of Product Mgmt, Red Hat
OpenStack Summit Atlanta,
May 2014
CEPH
CEPH UNIFIED STORAGE
• OBJECT STORAGE: S3 & Swift, Multi-tenant, Keystone, Geo-Replication, Native API
• BLOCK STORAGE: Snapshots, Clones, OpenStack, Linux Kernel, iSCSI
• FILE SYSTEM: POSIX, CIFS/NFS, HDFS, Linux Kernel, Distributed Metadata
ARCHITECTURE
[Diagram: applications, hosts/VMs, and clients accessing the Ceph storage cluster]
COMPONENTS
• INTERFACES: S3/Swift, host/hypervisor, iSCSI, CIFS/NFS, SDK
• STORAGE: object storage, block storage, file system
• STORAGE CLUSTERS: monitors (MON), object storage daemons (OSD)
THE PRODUCT: INKTANK CEPH ENTERPRISE
WHAT’S INSIDE?
• Ceph Object and Ceph Block
• Calamari
• Enterprise Plugins (2014)
• Support Services
USE CASE: OPENSTACK
[Diagram: Ceph backing OpenStack storage]
• Volumes
• Ephemeral disks
• Copy-on-Write clones
• Snapshots
USE CASE: CLOUD STORAGE
[Diagram: applications accessing Ceph object storage through the S3/Swift interface]
USE CASE: WEBSCALE APPLICATIONS
[Diagram: applications accessing the Ceph storage cluster directly over the native protocol]
ROADMAP: INKTANK CEPH ENTERPRISE
[Timeline: May 2014, Q4 2014, 2015]
USE CASE: PERFORMANCE BLOCK
[Diagram: Ceph storage cluster serving block read/write traffic, with reads and writes split across pools]
USE CASE: ARCHIVE / COLD STORAGE
[Diagram: Ceph storage cluster used for archive / cold storage]
ROADMAP: INKTANK CEPH ENTERPRISE
[Timeline: April 2014, September 2014, 2015]
USE CASE: DATABASES
[Diagram: databases accessing the Ceph storage cluster over the native protocol]
USE CASE: HADOOP
[Diagram: Hadoop nodes accessing the Ceph storage cluster over the native protocol]
INKTANK UNIVERSITY
• VIRTUAL: Online training for cloud builders and storage administrators; May 21 – 22; European time zone
• PUBLIC: Instructor-led with virtual lab environment; training for proof-of-concept or production users; June 4 – 5; US time zone
Ceph Reference Architectures and Case Study
Outline
• Planning your Ceph implementation
• Choosing targets for Ceph deployments
• Reference Architecture Considerations
• Dell Reference Configurations
• Customer Case Study
Planning your Ceph Implementation
• Business Requirements
– Budget considerations, organizational commitment
– Avoiding lock-in – use open source and industry standards
– Enterprise IT use cases
– Cloud applications/XaaS use cases for massive-scale, cost-effective storage
– Steady-state vs. spike data usage
• Sizing requirements (see the sizing sketch after this list)
– What is the initial storage capacity?
– What is the expected growth rate?
• Workload requirements
– Does the workload need high performance, or is it more capacity-focused?
– What are the IOPS/throughput requirements?
– What type of data will be stored?
– Ephemeral vs. persistent data; object, block, or file?
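
To make the sizing questions concrete, here is a minimal back-of-the-envelope sketch in Python (the replica count, growth rate, and fill-ratio values are illustrative assumptions, not figures from this deck):

def raw_capacity_needed(initial_usable_tb, annual_growth_rate, years,
                        replicas=3, target_fill_ratio=0.7):
    """Raw TB to provision so the cluster stays below the target fill ratio."""
    usable = initial_usable_tb * (1 + annual_growth_rate) ** years  # growth over the horizon
    return usable * replicas / target_fill_ratio                    # replication + headroom

# Example: 100 TB usable today, 40% yearly growth, 3-year horizon, 3x replication
print(round(raw_capacity_needed(100, 0.40, 3)), "TB raw")  # ~1176 TB raw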
How to Choose Target Use Cases for Ceph
• Capacity
– Traditional IT: NAS & Object Content Store (traditional NAS)
– Cloud Applications: XaaS Content Store
– Ceph Target: Open Source NAS/Object
• Performance
– Traditional IT: Virtualization and Private Cloud (traditional SAN/NAS)
– Cloud Applications: XaaS Compute Cloud (traditional SAN)
– Ceph Target: High Performance Open Source Block
Architectural considerations – Redundancy and replication
• Tradeoff between cost and reliability (use-case dependent)
• Use CRUSH configurations to map out your failure domains and performance pools (see the sketch after this list)
• Failure domains
– Disk (OSD and OS)
– SSD journals
– Node
– Rack
– Site (replication at the RADOS level, block replication, consider latencies)
• Storage pools
– SSD pool for higher performance
– Capacity pool
• Plan for failure domains of the monitor nodes
• Consider failure/replacement scenarios, lowered redundancies, and performance impacts
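
A minimal Python sketch of the failure-domain idea (the OSD, node, and rack names are hypothetical; in a real cluster CRUSH performs this placement, not application code):

# Hypothetical cluster map: OSD -> (host, rack)
CLUSTER_MAP = {
    "osd.0": ("node1", "rack1"), "osd.1": ("node2", "rack1"),
    "osd.2": ("node3", "rack2"), "osd.3": ("node4", "rack2"),
    "osd.4": ("node5", "rack3"), "osd.5": ("node6", "rack3"),
}

def spans_failure_domain(osds, level="rack"):
    """True if every replica lands in a distinct host (or rack)."""
    idx = 0 if level == "host" else 1
    return len({CLUSTER_MAP[o][idx] for o in osds}) == len(osds)

print(spans_failure_domain(["osd.0", "osd.2", "osd.4"]))  # True: replicas span three racks
print(spans_failure_domain(["osd.0", "osd.1", "osd.2"]))  # False: two replicas share rack1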
Server Considerations
• Storage node (sizing sketch below):
– One OSD per HDD; 1 – 2 GB RAM and ~1 GHz of a core per OSD
– SSDs for journaling and for the tiering feature in Firefly
– Erasure coding will increase usable capacity at the expense of additional compute load
– SAS JBOD expanders for extra capacity (beware of extra latency and oversubscribed SAS lanes)
• Monitor nodes (MON): odd number for quorum; services can be hosted on the storage nodes for smaller deployments, but larger installations will need dedicated nodes
• Dedicated RADOS Gateway nodes for large object store deployments and for federated gateways for multi-site
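
A quick way to apply those per-OSD rules of thumb (illustrative sketch; the defaults are assumptions taken from the bullet above, not Dell-validated figures):

def size_storage_node(hdd_count, ram_gb_per_osd=2, ghz_per_osd=1.0):
    """One OSD per data HDD; RAM and CPU scale with the OSD count."""
    osds = hdd_count
    return {"osds": osds,
            "min_ram_gb": osds * ram_gb_per_osd,
            "min_cpu_ghz": osds * ghz_per_osd}

print(size_storage_node(12))
# {'osds': 12, 'min_ram_gb': 24, 'min_cpu_ghz': 12.0} -> e.g. an 8-core 2 GHz CPU suffices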
Networking Considerations
• Dedicated or shared network
– Be sure to involve the networking and security teams early when designing your networking options
– Network redundancy considerations
– Dedicated client and OSD networks (bandwidth sketch below)
– VLANs vs. dedicated switches
– 1 Gb/s vs. 10 Gb/s vs. 40 Gb/s
• Networking design
– Spine and leaf
– Multi-rack
– Core fabric connectivity
– WAN connectivity and latency issues for multi-site deployments
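
A rough sketch of why a dedicated OSD (cluster) network matters: each client write is re-sent to the replica OSDs, so replication traffic scales with the replica count (the traffic numbers below are illustrative assumptions):

def network_load_gbps(client_write_gbps, client_read_gbps, replicas=3):
    """Client network carries reads + writes; the OSD/cluster network carries
    (replicas - 1) copies of each write (recovery/backfill traffic not modeled)."""
    return {"client_net_gbps": client_write_gbps + client_read_gbps,
            "cluster_net_gbps": client_write_gbps * (replicas - 1)}

print(network_load_gbps(client_write_gbps=4, client_read_gbps=6))
# {'client_net_gbps': 10, 'cluster_net_gbps': 8} -> 10 GbE per network is a comfortable fit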
Ceph additions coming to the Dell Red Hat OpenStack solution
Pilot configuration
• Benefits
– Rapid on-ramp to OpenStack cloud
– Scale-up, modular compute and storage blocks
– Single point of contact for solution support
– Enterprise-grade OpenStack software package
• Components
– Dell PowerEdge R620/R720/R720XD servers
– Dell Networking S4810/S55 switches, 10GbE
– Red Hat Enterprise Linux OpenStack Platform
– Dell ProSupport
– Dell Professional Services
– Available with or without High Availability
• Specs at a glance – storage bundles (see the arithmetic sketch below)
– Node 1: Red Hat OpenStack Manager
– Node 2: OpenStack Controller (2 additional controllers for HA)
– Nodes 3 – 8: OpenStack Nova Compute
– Nodes 9 – 11: Ceph, 12 x 3 TB raw storage
– Network switches: Dell Networking S4810/S55
– Supports ~170 – 228 virtual machines
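
Quick arithmetic on the pilot bundle above (illustrative; it assumes the 12 x 3 TB raw figure is per Ceph node, which the slide does not state explicitly):

ceph_nodes = 3                         # nodes 9-11
raw_per_node_tb = 12 * 3               # 12 x 3 TB drives per node (assumption)
compute_nodes = 6                      # nodes 3-8

print(ceph_nodes * raw_per_node_tb, "TB raw across the Ceph nodes")             # 108 TB raw
print(170 // compute_nodes, "-", 228 // compute_nodes, "VMs per compute node")  # ~28-38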
Example Ceph Dell Server Configurations
• Performance (20 TB)
– R720XD: 24 GB DRAM, 10 x 4 TB HDD (data drives), 2 x 300 GB SSD (journal)
• Capacity (44 TB / 105 TB*)
– R720XD: 64 GB DRAM, 10 x 4 TB HDD (data drives), 2 x 300 GB SSD (journal)
– MD1200: 12 x 4 TB HDD (data drives)
• Extra Capacity (144 TB / 240 TB*)
– R720XD: 128 GB DRAM, 12 x 4 TB HDD (data drives)
– MD3060e (JBOD): 60 x 4 TB HDD (data drives)
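
A minimal sketch of how the quoted sizes relate to raw drive counts, assuming 2x replication (an assumption, but it reproduces the 20 TB and 44 TB figures above; the starred numbers are not broken down in the deck):

def usable_tb(drive_count, drive_tb, replicas=2):
    """Usable capacity = raw capacity divided by the replica count."""
    return drive_count * drive_tb / replicas

print(usable_tb(10, 4))       # Performance: 40 TB raw -> 20.0 TB usable
print(usable_tb(10 + 12, 4))  # Capacity (R720XD + MD1200): 88 TB raw -> 44.0 TB usable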
What Are We Doing To Enable?
• Dell, Red Hat, and Inktank have partnered to bring a complete enterprise-grade storage solution for RHEL-OSP + Ceph
• The joint solution provides:
– Co-engineered and validated Reference Architecture
– Pre-configured storage bundles optimized for performance or capacity
– Storage enhancements to existing OpenStack bundles
– Certification against RHEL-OSP
– Professional services, support, and training
› Collaborative support for Dell hardware customers
› Deployment services & tools
UAB Case Study
Overcoming a data deluge
Inconsistent data management across research teams hampers productivity
• Growing data sets challenged available resources
• Research data distributed across laptops,
USB drives, local servers, HPC clusters
• Transferring datasets to HPC clusters took too
much time and clogged shared networks
• Distributed data management reduced
researcher productivity and put data at risk
Solution: a storage cloud
Centralized storage cloud based on OpenStack and Ceph
• Flexible, fully open-source infrastructure
based on Dell reference design
− OpenStack, Crowbar and Ceph
− Standard PowerEdge servers and storage
− 400+ TBs at less than 41¢ per gigabyte
• Distributed scale-out storage provisions
capacity from a massive common pool
− Scalable to 5 petabytes
• Data migration to and from HPC clusters via
dedicated 10Gb Ethernet fabric
• Easily extendable framework for developing
and hosting additional services
− Simplified backup service now enabled
“We’ve made it possible for users to
satisfy their own storage needs with
the Dell private cloud, so that their
research is not hampered by IT.”
David L. Shealy, PhD
Faculty Director, Research Computing
Chairman, Dept. of Physics
Building a research cloud
Project goals extend well beyond data management
“We envision the OpenStack-based
cloud to act as the gateway to our
HPC resources, not only as the
purveyor of services we provide, but
also enabling users to build their own
cloud-based services.”
John-Paul Robinson, System Architect
• Designed to support emerging
data-intensive scientific computing paradigm
– 12 x 16-core compute nodes
– 1 TB RAM, 420 TBs storage
– 36 TBs storage attached to each compute node
• Virtual servers and virtual storage meet HPC
− Direct user control over all aspects of the
application environment
− Ample capacity for large research data sets
• Individually customized test/development/
production environments
− Rapid setup and teardown
• Growing set of cloud-based tools & services
− Easily integrate shareware, open source, and
commercial software
Research Computing System (Next Gen)
A cloud-based computing environment with high-speed access to dedicated and dynamic compute resources
[Diagram: UAB Research Network connecting a cloud services layer – a virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph, built from OpenStack nodes on 10Gb Ethernet – to HPC clusters and HPC storage over DDR and QDR InfiniBand]
THANK YOU!
Contact Information
Reach Kamesh and Neil for additional information:
Dell.com/OpenStack
Dell.com/Crowbar
Inktank.com/Dell
Kamesh_Pemmaraju@Dell.com
@kpemmaraju
Neil.Levine@Inktank.com
@neilwlevine
Visit the Dell and Inktank booths in the OpenStack Summit Expo Hall