Introduction to NetApp products

Six major trends that affect how storage architectures are built and how IT departments deliver
services:
1. Flash
a. Flash technology enables IT administrators to improve I/O performance while reducing cost and improves overall storage architecture performance.
2. Cloud
a. Most companies are implementing cloud technology for their data centers.
Cloud solutions enable IT departments to offer enhanced services to their
business customers.
3. Software-defined data center
a. Like cloud technology, the software-defined data center enables IT departments to deliver new services to business customers.
4. Converged infrastructure
a. As data centers become more complex, IT managers seek easier methods to
integrate data center technologies. Converged infrastructures offer viable
options for these organizations.
5. Mobility
a. As more people depend on mobile devices such as smartphones and tablets, IT
departments must enable users to access data safely and securely, anytime and
anywhere.
6. Big data
a. Big data is generated from all parts of an organization. IT departments need new
ways to manage greater amounts of big data and the complex analytics that are
associated with this data.
Shared storage infrastructure
Provides many features, including multi-tenancy, scale-out NAS, support for midsize to enterprise
environments, and mission-critical SAN.
- Features of shared storage infrastructure
o Nondisruptive operations
o Proven efficiency
o Seamless scalability
Dedicated storage solutions infrastructure
E-Series storage systems and EF-Series flash arrays.
The E-Series is well suited for demanding workloads that require high performance and high capacity.
The EF-Series is a flash-based solution that delivers low latency, ultrahigh performance, and enterprise-level reliability.
Universal data platform
Because NetApp cloud solutions are based on clustered Data ONTAP, they provide consistent data
services and simplified data management across clouds.
Dynamic data portability
Integrated data portability technology enables applications and data to move across cloud resources
and providers.
Extensive customer choices
IT organizations can choose from a broad ecosystem of technology solutions and cloud provider
options to meet their unique business requirements.
OnCommand management software products:
- System Manager
o Provides device-level management of NetApp storage systems
- Unified Manager
o Monitors the availability, capacity, performance, and protection of clustered Data ONTAP resources
- Workflow Automation
o Enables automation and delegation of all repeatable storage management and storage service tasks
- Performance Manager
o Provides performance monitoring and root-cause analysis of clustered Data ONTAP systems
- Insight
o Enables storage resource management and advanced reporting for heterogeneous environments.
Module 2: Shared storage solutions on clustered Data ONTAP
Clustered Data ONTAP
Clustered Data ONTAP offers three primary benefits:
1. Nondisruptive operations
2. Proven efficiency
Includes support for SSD, SAS, and SATA drives
3. Seamless scalability
Capabilities of clustered Data ONTAP
- RAID-DP technology
o All Data ONTAP disks are organized into RAID groups, which provide parity protection against data loss. Each RAID group consists of data disks, a parity disk (as in RAID 4), and a double-parity disk (RAID-DP). A double-parity RAID group must contain at least three disks: one or more data disks, a parity disk, and a double-parity disk. If a data disk fails in a RAID group, Data ONTAP replaces the failed disk with a spare disk and automatically uses the parity data to reconstruct the failed disk's data on the replacement disk.
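A minimal, illustrative Python sketch (not NetApp's implementation) of how row parity lets a RAID group rebuild a failed disk onto a spare. RAID-DP's second, diagonal parity disk, which protects against double failures, is omitted for brevity.

```python
# Illustrative only: XOR row parity (RAID 4 style) and single-disk reconstruction.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data disks plus one row-parity disk.
data_disks = [b"\x01\x02", b"\x10\x20", b"\xAA\x00"]
parity_disk = xor_blocks(data_disks)

# Simulate losing data disk 1 and rebuilding it onto a spare:
surviving = [data_disks[0], data_disks[2], parity_disk]
rebuilt = xor_blocks(surviving)
assert rebuilt == data_disks[1]   # the spare now holds the reconstructed data
```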
- NVRAM
o Nonvolatile RAM, or NVRAM, is a component of a FAS controller that contains a built-in battery connection. This connection provides an uninterrupted power supply, so that data is not lost if the external power source fails. As I/O requests come into a system, they first go to RAM. The RAM on a NetApp system, as in other systems, is where the Data ONTAP operating system does active processing. As write requests come in, the operating system logs them into NVRAM. Any operation that is logged into battery-backed RAM is safe from controller failure.
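A conceptual Python sketch of the journaling idea described above; the class and method names are invented for illustration and do not correspond to Data ONTAP internals.

```python
# Writes are journaled to battery-backed NVRAM before they are acknowledged,
# so a controller failure before the next flush to disk does not lose them.

class Controller:
    def __init__(self):
        self.ram_cache = {}    # volatile working memory
        self.nvram_log = []    # battery-backed journal of pending writes

    def write(self, block_id, data):
        self.nvram_log.append((block_id, data))  # journal first (survives power loss)
        self.ram_cache[block_id] = data          # then process in RAM
        return "ack"                             # now safe to acknowledge the client

    def flush_to_disk(self, disk):
        for block_id, data in self.nvram_log:    # commit the journaled writes to disk
            disk[block_id] = data
        self.nvram_log.clear()                   # journal entries are no longer needed
```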
- WAFL
o The WAFL (Write Anywhere File Layout) file system defines how blocks are laid out in the Data ONTAP operating system. The WAFL file system manages reading and writing data to and from disks. It optimizes writes by collecting data blocks and grouping them so that the system can write them to any location on disk, and this placement also optimizes reads. WAFL always writes to the nearest available free block, which decreases write time. Other file systems write to preallocated locations on disk, which requires time-consuming disk seeks.
- Snapshot
o Snapshot copies are read-only, point-in-time images of a volume that enable fast, space-efficient restores.
Data ONTAP storage efficiency
- Data compression
o The data compression process reduces the physical capacity that is required to store data on a FlexVol volume by eliminating repeated patterns in data blocks. This process can be used for primary, secondary, and archive storage, and pre-existing data can be compressed. Compression can run inline or can be completed manually or by schedule. The process is transparent to the application, so no application awareness is needed. Compression requires deduplication to be enabled on the volume, but it does not require deduplication to be scheduled to run.
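A small illustration, using Python's zlib, of why eliminating repeated patterns shrinks a block. Data ONTAP uses its own compression engine and compression groups, so this only demonstrates the general principle.

```python
import zlib

block = b"ABCD" * 1024                      # a 4 KB block full of a repeated pattern
compressed = zlib.compress(block)

print(len(block), "->", len(compressed))    # 4096 bytes shrinks to a few dozen bytes
restored = zlib.decompress(compressed)
assert restored == block                    # transparent: the application sees the same data
```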
- Deduplication
o Improves storage efficiency by finding identical blocks of data and replacing them with references to a single shared block. The same block of data can belong to several files or LUNs, or it can appear repeatedly within the same file. When deduplication is turned on, it finds and removes duplicate 4-KB blocks that are stored in the WAFL file system.
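A minimal sketch of block-level deduplication by fingerprinting 4-KB blocks; the hashing scheme and data structures are illustrative, not Data ONTAP's actual fingerprint mechanism.

```python
import hashlib

physical_blocks = {}        # fingerprint -> one shared physical copy of the data
file_block_refs = []        # the file's ordered list of fingerprints (references)

def write_4k_block(data: bytes):
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in physical_blocks:      # store each unique block only once
        physical_blocks[fingerprint] = data
    file_block_refs.append(fingerprint)         # duplicates become extra references

for chunk in [b"X" * 4096, b"Y" * 4096, b"X" * 4096]:
    write_4k_block(chunk)

print(len(file_block_refs), "logical blocks,", len(physical_blocks), "physical blocks")  # 3 logical, 2 physical
```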
- FlexClone software
o Reduces storage needs by using Snapshot technology to replicate the production copy as a readable and writable virtual clone. FlexClone copies are created instantaneously and have little to no incremental impact on available storage, even as the copies multiply.
- Thin provisioning
o Enables storage to be allocated on an as-needed basis, which streamlines the provisioning process and removes the guesswork from storage allocation. Thin provisioning is a FlexVol volume feature of the Data ONTAP operating system. It leverages a common pool of storage across all storage devices. The use of a common pool enables administrators to respond quickly to requests by allocating less space initially and then adding or removing space, depending on how the space is used and how needs change.
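A conceptual sketch of thin provisioning: a volume presents a large logical size but draws physical space from a shared pool only as data is actually written. The class names and sizes are hypothetical.

```python
class StoragePool:
    def __init__(self, physical_capacity_gb):
        self.physical_capacity_gb = physical_capacity_gb
        self.used_gb = 0

class ThinVolume:
    def __init__(self, pool, logical_size_gb):
        self.pool = pool
        self.logical_size_gb = logical_size_gb    # the size the host sees
        self.written_gb = 0                       # the space actually consumed

    def write(self, gb):
        if self.pool.used_gb + gb > self.pool.physical_capacity_gb:
            raise RuntimeError("pool exhausted: grow the pool or free space")
        self.pool.used_gb += gb                   # space is allocated only on write
        self.written_gb += gb

pool = StoragePool(physical_capacity_gb=100)
vol_a = ThinVolume(pool, logical_size_gb=500)     # deliberately over-provisioned
vol_a.write(10)
print(vol_a.logical_size_gb, "GB presented,", pool.used_gb, "GB actually used")
```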
- Thin replication
o Simplifies disk-based disaster recovery and backup. Thin replication enables storage administrators to restore full, point-in-time data at granular levels, unlike competitor software. Storage capacity on replication targets is reduced, and storage capacity requirements are reduced by up to 90%.
+ Reduces latency
+ Intelligent caching
+ Data ONTAP RAID-protected aggregates
NetApp solutions for Oracle Database and applications
- Database solutions
o Core technologies such as thin provisioning and the FlexVol feature make Oracle data management simple and efficient. NetApp SnapManager for Oracle and FlexClone technology simplify and automate the database cloning and refresh process and require minimal additional storage. NetApp integrates with Oracle products such as Oracle Multitenant, Oracle Enterprise Manager, and Oracle Recovery Manager.
- Application solutions
o Oracle application solutions increase the speed and quality of project development
and deployment. Customers can accelerate their application lifecycle, improve SLAs,
and respond with ease to events, data growth, and changing performance needs.
- Manageability solutions
o NetApp provides integration and manageability products for data protection, development and testing environments, and monitoring. These products ensure better data availability, efficient use of space, and a well-monitored Oracle environment.
NetApp solutions for Microsoft databases and applications
- Microsoft Exchange Server
- Microsoft SharePoint Server
- Microsoft SQL Server
Data protection for shared storage infrastructures
Data protection copies are placed on aggregates of SATA disks using RAID-DP technology. Data is then mirrored to the destination cluster during the least active time in the cluster.
- SnapMirror technology
SnapMirror replication is asynchronous. When the scheduler triggers a replication update, a new Snapshot copy is created on the source volume. The block-level difference between the new Snapshot copy and the last replicated Snapshot copy is determined and then transferred to the destination volume. This transfer includes Snapshot copies that were created between the last replicated Snapshot copy and the new one. When the transfer is complete, the new Snapshot copy exists on the destination volume. If a disaster occurs, the destination takes over operations.
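A simplified sketch of the asynchronous update cycle described above: snapshot the source, compute the block-level difference from the last replicated Snapshot copy, and transfer only the changed blocks. This models the idea only and is not the SnapMirror protocol.

```python
def take_snapshot(volume):
    return dict(volume)                               # point-in-time image of block contents

def snapmirror_update(source_volume, last_replicated_snapshot, destination_volume):
    new_snapshot = take_snapshot(source_volume)
    # block-level difference between the two Snapshot copies
    changed = {blk: data for blk, data in new_snapshot.items()
               if last_replicated_snapshot.get(blk) != data}
    destination_volume.update(changed)                # transfer only the changed blocks
    return new_snapshot                               # becomes the new replication baseline

source = {"b1": "AAA", "b2": "BBB"}
baseline = take_snapshot(source)
destination = dict(baseline)

source["b2"] = "BBB-modified"                         # the workload keeps writing
baseline = snapmirror_update(source, baseline, destination)
assert destination == source                          # destination is now up to date
```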
- SnapVault backups
SnapVault software leverages block-level incremental replication to provide a reliable, low-overhead
backup solution. SnapVault technology provides efficient data protection by copying only the data
blocks that changed since the last backup, instead of copying entire files. As a result, backups are
taken more often while the storage footprint is reduced, because no redundant data is moved or
stored. By default, vault transfers retain storage efficiency on disk and over the network, which
further reduces network traffic. Additional deduplication and compression can be configured on the
destination volume. The SnapVault feature uses two deployment operations: fan-in and fan-out. The
fan-in operation supports up to 225 nodes backing up to one storage system. This deployment is
ideal for centralized data protection between remote and branch offices. The fan-out operation
enables one system to send backups to up to 16 nodes, which is useful for data distribution.
Data ONTAP SnapMirror replication technology is used for disaster recovery. The SnapMirror
function creates and updates a single replica of the source volume on a remote secondary server. If
a disaster occurs, a system administrator can break the SnapMirror relationship, convert the
SnapMirror replica to a writable volume, and then redirect users to write to the SnapMirror replica.
Data ONTAP SnapVault is a type of SnapMirror replication that is used to create an archive of daily,
weekly, and monthly read-only copies of the source volume for point-in-time recovery. If data is lost,
the storage administrator can retrieve the data from a SnapVault copy on the secondary system back
to the primary system. To use the SnapVault copy as a writable volume, create a FlexClone volume
copy of the SnapVault copy.
MetroCluster software provides nondisruptive operations with zero data loss and near zero recovery
time for unplanned events and disasters. Key features of this product include a simple switchover
and switchback mechanism in case of a disaster and automatic local failover for component-level
failures. MetroCluster software supports data recovery at up to 200 kilometers. This feature
interoperates with SnapMirror software to provide unlimited long-distance disaster recovery and
efficient disk-to-disk backup using the SnapVault solution. Data replicates synchronously across two
Data ONTAP clusters. A high-availability pair is present at each site. For example, in a four-node
configuration, two nodes are active, while the other two nodes are inactive. If a disaster occurs, the
inactive nodes are activated. All active nodes can host or serve data. Node failure is managed locally.
Data is mirrored at the aggregate level using the RAID SyncMirror feature. NVRAM is mirrored to a
high-availability, or HA, partner and a disaster recovery partner. The cluster configuration is mirrored
to the remote site. If a disaster occurs, the switchover is done with one command. The switchback
can be completed by using three commands.
SnapProtect management software is an application-aware and virtualization-aware
centralized backup and replication management tool. SnapProtect management software is
optimized for environments that connect NetApp storage to NetApp storage. SnapProtect software
combines the capabilities of a traditional backup application with NetApp Snapshot and replication
technologies. This solution manages, monitors, and controls Snapshot copies, replication, and tape
operations from a single interface.
Module 3: Dedicated storage solutions
This module describes dedicated storage solutions that are ideal for intensive workloads. You learn
about the E-Series storage system and the EF-Series flash array, FlashRay all-flash storage, and the
StorageGRID Webscale system. All of these solutions are designed for SAN environments. This
module requires approximately 25 minutes.
o E-Series solutions
 E-Series solutions provide superior reliability and a price/performance ratio
that no other product in the storage industry can meet. The E-Series
platforms are ideal for a wide range of customers, from entry-level SAN
environments to big-data, performance-centric SAN environments. The
E2700 platform is targeted at entry-level SAN configurations. The E5600 is
targeted at performance-centric SAN environments.
o E-Series solutions: modular design
 E-Series storage systems use a modular design. Customers can mix and
match disk shelves, controllers, and drives. Because E-Series
implementations enable the creation of custom configurations, E-Series
solutions meet the performance and capacity needs of every customer.
Because all configurations use the same architecture and the same
SANtricity storage management software, customization does not increase
complexity.
o E-SERIES SOLUTIONS: FEATURES AND BENEFITS
 Dynamic disk pools
 Dynamic Disk Pools (DDP) technology spreads data across all drives in a pool, which reduces performance bottlenecks due to hot spots. If a drive failure occurs, DDP enables a quick return to an optimal state. Drive rebuild times are up to six times faster, while high system performance is maintained.
 Thin provisioning
 Improves storage utilization by up to 35% and reduces overprovisioning. By setting a starting capacity and a maximum capacity, users can “set it and forget it” for worry-free business value.
 Enterprise replication
 Provides enterprise-class disaster recovery of data. By mirroring over FC or IP, E-Series systems work together to maintain a consistent set of data across platforms. Data from a volume on one system is automatically copied to a volume on a remote system. If a disaster overtakes the local system, the data is available from the remote site.
 Drive encryption
 Users often are unable to control the security of disk drives, which can be misplaced, sent off-site for service or repair, or disposed of. Drive encryption combines local key management and drive-level encryption for comprehensive data security. Data is protected throughout the lifecycle of the drive, yet storage system performance and ease of use are unaffected.
 Snapshot technology
 Is a capacity-efficient, point-in-time (PiT) image of a volume that is used for file and volume restoration or application testing. The image points back to a base volume, and any request for data from the image is satisfied by data that resides on the base volume. If the base volume handles a write that would change data that is used by the image, the original data is copied to a small repository that holds the actual capacity that is assigned to the image. (A copy-on-write sketch follows after this list.)
 Volume copy
 SANtricity volume copy is a feature that creates an exact copy (that is, a clone) of a base volume. Unlike a snapshot copy, a volume copy is a full copy of the data set, not just a series of pointers to the original location. One main use of the volume copy feature is to create a full PiT data set for analysis, mining, or testing. Another main use is to redistribute data for performance and capacity optimization.
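A hedged sketch of the copy-on-write behavior described under "Snapshot technology" above: reads of the image are served from the base volume until a block is overwritten, at which point the original block is preserved in a small repository. The structures are illustrative, not SANtricity internals.

```python
base_volume = {0: "orig-0", 1: "orig-1", 2: "orig-2"}
snapshot_repository = {}                      # holds pre-change copies only

def write_to_base(block, new_data):
    if block not in snapshot_repository:      # first overwrite since the snapshot was taken
        snapshot_repository[block] = base_volume[block]
    base_volume[block] = new_data

def read_from_snapshot(block):
    # the image's data comes from the repository if the block changed, else from the base volume
    return snapshot_repository.get(block, base_volume[block])

write_to_base(1, "new-1")
print(read_from_snapshot(1))                  # "orig-1": the point-in-time view is preserved
print(base_volume[1])                         # "new-1": the live volume has moved on
```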
E-SERIES SOLUTIONS: DEDICATED WORKLOADS
o E-Series storage systems are designed for specific types of high-performance computing workloads. For example, E-Series solutions are especially useful for high-performance file systems and for workloads such as digital video surveillance and big-data analytics. However, E-Series solutions also work effectively for general-purpose SAN environments. E-Series solutions enable disk backup, read-intensive transactional databases, image archiving, and email servers.
E-SERIES SOLUTIONS: OPERATING SYSTEM AND GUI
All E-Series platforms are built on the SANtricity operating system, so all platforms display the same SANtricity GUI, and customers can move easily from one platform to another. Customers use the SANtricity software to set up, configure, and manage their E-Series environments. Because it is a thin client with low overhead, the SANtricity software offers customers a rich feature set that yields a fast customer experience.
E-SERIES DEPLOYMENT: BACKUP TO DISK
Consider an example of how an E-Series deployment solved a customer problem. This customer's
data was growing exponentially. Management costs were dramatically increasing. In addition,
recovery was unreliable and time-consuming. The customer purchased the E2700
platform. When the customer used the platform's backup-to-disk solution, recovery times were
reduced and reliability increased. In addition, the E-Series drive encryption feature provided an
additional level of drive security. Data is now protected throughout the lifecycle of the drive, yet
storage system performance and ease of use are unaffected.
E-SERIES DEPLOYMENT: VIDEO SURVEILLANCE
Consider another example of how an E-Series deployment solved a customer problem. This customer needed a long-retention, high-resolution video surveillance solution to process real-time video applications. The customer realized that higher-resolution video required more capacity and that longer retention times impacted system performance for recalling video. The purchase of an E-Series E5600 system enabled this customer to process real-time video applications with high reliability, performance, and availability. This customer gained a cost-effective solution that could scale to any desired amount of surveillance content.
Lesson two: NetApp flash technology
Flash technology:
- EF-Series and SANtricity
o Provides all-flash storage that is built for SAN
o Is field-proven in mission critical environments
o Excels at price/performance ratio, low latency, and density
o Leverages applications for data management and mobility
- FAS and Data ONTAP
o Offers the flexibility of all-flash and hybrid technologies within the same cluster
o Provides infrastructure storage that is built for consolidated workloads
o Enables nondisruptive data mobility between tiers and clouds
o Provides enterprise-class data lifecycle management.
- FlashRay and Mars OS
o Is built to improve the economics and performance of flash
o Provides always-on inline efficiencies with adaptable performance
o Leverages and integrates with the Data ONTAP ecosystem
o Provides expanded opportunities in SAN
EF-SERIES
E-Series SAN storage systems provide superior reliability and a price/performance ratio that no other
product in the storage industry can meet. The all-flash EF-Series solution is a solid-state drive (SSD)
storage system that leverages the performance and reliability of E-Series technology to deliver
extreme performance and reliability. The EF-Series product line consists of the EF560 platform and
the EF550 platform. These products are built on three key pillars: low-latency performance, maximum
density, and enterprise-class reliability. When these capabilities are combined, they enable customers
to drive greater speed and responsiveness from their applications.
EF-SERIES SOLUTION: MODULAR DESIGN
Like E-Series systems, all EF-Series systems use a modular design. Customers can mix and match disk
shelves, controllers, and drives to meet performance and capacity needs. All EF-Series configurations
use the same architecture and the same SANtricity storage management software, thereby reducing
the time that is needed for implementation and training.
EF-SERIES SOLUTION: DEDICATED WORKLOADS
The EF-Series all-flash array is designed for enterprise customers that want to drive extreme speed
and responsiveness from their I/O-intensive workloads. Workloads that benefit from the EF-Series
solution include transactional workloads (that is, OLTP workloads), such as order entry and financial
transactions. Other workloads that benefit include analytics workloads (that is, OLAP workloads),
such as relational databases and data mining, and business intelligence workloads, such as report
writing and benchmarking.
CUSTOMER DEPLOYMENT: EF-540: ONLINE RETAILER
Consider an example of how an EF-Series deployment solves a customer problem. This global online
retailer required consistent submillisecond latency to accelerate a payment transaction database
and increase customer satisfaction. Because each minute of downtime meant significant lost
revenue to the customer, very high system reliability was also required. This customer purchased the
EF540 flash array, which improved the performance of the Oracle databases that governed
transactions by 20 times what was previously possible. The EF540 occupied just one-quarter of the
previous storage footprint, which resulted in a dramatic 75% reduction in the customer's power and
cooling costs.
CUSTOMER DEPLOYMENT: EF-540: ENGINEERING APPLICATION
Consider another example of how an EF-Series deployment solves a customer problem. This
customer is a contract drilling company that has been a top industry performer for over 90 years.
This customer is committed to maintaining its reputation through innovation and service. Their
clients pay for drilling that is based on oil rig data that is captured or ingested once per minute. The
CIO requested an increase in the ingest rate of oil rig data to once per second, so that more data
would be available to clients. The existing storage was not designed to handle these extremely high
IOPS and sustained bandwidth requirements, so the drilling company purchased the EF540 system.
The EF540 system helped this customer achieve an ingest rate of once per second. Clients could access real-time drilling analytics such as drilling depth, what the drill hit, and so forth. This information enabled
the clients to make immediate decisions about the drilling they were paying for.
FLASHRAY: ALL-FLASH STORAGE WITH MARS OS
FlashRay is an all-new flash array that combines the benefits of all-flash storage technology with
patented NetApp software to improve the economics and performance of flash technology. FlashRay
provides breakthrough performance, storage efficiency, data protection, and data management. The
result is a no-compromise approach to all-flash storage that provides more value for the storage
dollar and enables customers to address a wide range of high-performance SAN workloads. FlashRay
technology uses the Mars OS operating system. Mars OS is a new architecture that delivers the
classic NetApp values of efficiency, protection, and data management. Among its innovations are
"always-on" inline efficiency features and a variable length block layout. Together these features
minimize the I/O activity to flash, increase effective capacity, and deliver adaptable performance at
consistent submillisecond latencies. Mars OS is designed to address future needs as well as current
ones, by establishing a foundation to enable tight integration with Data ONTAP, ensure fine-grained
seamless scaling of capacity and performance, and leverage future solid-state technologies to
further drive down the cost of all-flash storage.
Lesson three: The StorageGRID Webscale system
THE EVOLVING DATA MANAGEMENT CHALLENGE
The Internet of Things not only drives decentralized data creation and consumption and massive
growth in unstructured data, but it also requires efficient data management to balance cost and
performance in the hybrid cloud environment. As data is created
and consumed across many sites (in contrast to a more traditional data center setup), IT
departments need to reevaluate how to manage a large amount of data that is spread over different
locations. A solution is to create multisite datastores that bring data closer to its workloads,
applications, and users. The growth in unstructured data drives new requirements for storing and
protecting data. With object-enabled data management, organizations can establish highly granular,
flexible data management policies that determine how data is stored and protected. Such policies
address a wide range of requirements for performance, durability, availability, geo-location, and
longevity. The value of data typically changes over time. So does the cost of storing data in a specific
location or storage technology. These challenges require policy-based data management that
ensures optimal data storage throughout the entire data lifecycle, including storage on-premises and
off-premises and in cloud-based infrastructures.
- Decentralized data creation and consumption
o Multisite topologies are required:
 The primary data center plus the disaster recovery site are replaced by
multisite datastores
 Users, workloads, and data are brought closer together.
- Unstructured data growth
o Object-enabled data management is required:
 Intelligent data placement is enabled
 A wide range of performance, durability, availability, geo-location, and
longevity is required
- Cost and performance balance in the hybrid cloud
o Policy-based data management is required:
 Data must be placed optimally throughout the data lifecycle
 On-premises and cloud-based storage must be dynamically leveraged over time.
THE STORAGEGRID WEBSCALE SYSTEM
NetApp delivers on the promise of object storage today, addressing the challenges of the most
demanding environments with the StorageGRID Webscale system. The StorageGRID Webscale
system is a massively scalable, software-defined storage solution for large archives, media
repositories, and web datastores. The StorageGRID Webscale system is a distributed storage system
that is based on object storage. It supports the StorageGRID API (or SGAPI) and standard RESTful
protocols, such as the Cloud Data Management Interface, or CDMI, protocol, and Amazon Simple
Storage Service (otherwise known as S3). These protocols ensure compatibility with cloud
applications. The system can scale to a large number of objects that are distributed across many
data centers. Because the StorageGRID Webscale system is software-defined, it gives customers the
flexibility to choose their ideal storage hardware, including disk and tape. The StorageGRID Webscale
system incorporates a highly dynamic policy engine that determines exact "cradle to grave" data
placement based on cost, performance, durability, availability, and longevity requirements,
and that enforces compliance with required policies.
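Because the StorageGRID Webscale system supports the standard S3 protocol, a stock S3 client such as boto3 can store and retrieve objects against it. The endpoint URL, credentials, bucket name, key, and metadata below are placeholders, not values from this course.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",  # hypothetical gateway address
    aws_access_key_id="TENANT_ACCESS_KEY",
    aws_secret_access_key="TENANT_SECRET_KEY",
)

# Ingest an object; user metadata like this could later drive ILM placement rules.
s3.put_object(
    Bucket="media-archive",
    Key="videos/2015/clip-001.mp4",
    Body=b"...video bytes...",
    Metadata={"retention": "7y", "location": "eu"},
)

# Retrieve the object and inspect its size and metadata.
obj = s3.get_object(Bucket="media-archive", Key="videos/2015/clip-001.mp4")
print(obj["ContentLength"], obj["Metadata"])
```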
THE STORAGEGRID WEBSCALE SYSTEM: FEATURES AND BENEFITS
A key element of the unique StorageGRID Webscale storage system is its data durability framework,
which establishes both data integrity and data availability. As an
object is ingested, the StorageGRID Webscale system creates a digital fingerprint. The system offers
multiple interlocking layers of integrity protection such as hashes, checksums, and authentications.
Object integrity is verified upon ingest, retrieval, replication, migration, and at rest. Suspect objects
are automatically regenerated. Object availability functions include a fault-tolerant architecture that
delivers nondisruptive operations and infrastructure refreshes, with load balancing during normal
and degraded conditions. In addition, the NetApp AutoSupport tool can be set up to provide NetApp
support with weekly updates on the health and status of the system. The NetApp AutoSupport tool
enables users to proactively troubleshoot issues that arise. The AutoSupport tool is available for
other NetApp products, such as the E-Series storage platform. When it is deployed on E-Series
storage systems, the StorageGRID Webscale system takes advantage of features such as Dynamic
Disk Pools technology for node-level erasure coding. Such features enable highly efficient
deployments, because single component failures (such as disk drive failure) won’t affect overall
system functionality. Data management is another key element of the StorageGRID Webscale
system. As part of the data management framework, the
StorageGRID Webscale system supports multiple deployment models. These models range from
virtualized software deployments in on-premises or off-premises private clouds to support for third-party storage systems and NetApp E-Series storage systems. The StorageGRID Webscale system
supports both S3 and CDMI protocols for seamless integration of cloud applications. The
StorageGRID Webscale system's dynamic policy engine, which applies an information lifecycle
management (ILM) policy, executes ILM rules against objects at ingest as well as at rest. These ILM
rules can be configured based on factors such as resource availability and latency, data retention and
geographical location requirements, and network cost. The ILM policy and ILM rules for the system
are automatically reevaluated, and the system brings objects into compliance.
- Data durability
o Data integrity
 Digital fingerprint
 Tamper detection
 Integrity verification in all critical operations
 Automatic regeneration of suspect objects
o Data availability
 A fault tolerant architecture supports nondisruptive operations, upgrades,
and infrastructure refreshes
 Load balancing automatically distributes workloads during normal
operations and failures
 The NetApp AutoSupport tool automatically alerts NetApp support for proactive issue resolution
 Node-level erasure coding uses E-Series Dynamic Disk Pools technology to improve single-node availability.
- Data management
o Multiple deployment models
 Virtualized software deployments that support on-premises and hosted environments
 Available on third-party storage systems
 Performance, density, and efficiency benefits when deployed on E-Series storage systems
 S3 and CDMI protocol support for cloud applications
o Dynamic policy engine
 User metadata drives placement policy
 The system evaluates placement policy at ingest or access
 Policies are defined by these factors:
 Resource availability and latency
 Data retention requirements
 Geographical location requirements
 Network cost
 Policy changes are applied retroactively
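A minimal sketch of the idea behind the dynamic policy engine: ILM-style rules are evaluated against an object's metadata to decide where copies should be placed. The rule structure and site names are invented for illustration and do not reflect the StorageGRID rule schema.

```python
ilm_rules = [
    # (condition on object metadata, placement decision)
    (lambda meta: meta.get("retention") == "7y", ["datacenter-eu", "tape-archive"]),
    (lambda meta: meta.get("location") == "us",  ["datacenter-us-east", "datacenter-us-west"]),
    (lambda meta: True,                          ["datacenter-default"]),   # catch-all rule
]

def evaluate_placement(object_metadata):
    for condition, placement in ilm_rules:       # first matching rule wins
        if condition(object_metadata):
            return placement

print(evaluate_placement({"retention": "7y", "location": "eu"}))   # a disk site plus tape
print(evaluate_placement({"location": "us"}))                       # two US data centers
```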
THE STORAGEGRID WEBSCALE SYSTEM: BENEFITS FOR ALL PARTIES
The StorageGRID Webscale system is designed to ensure that all parties successfully fulfill their
business requirements.
- IT benefit
o Store and easily manage large amounts of data
o Store 100 billion objects in a single, elastic content store:
 Customizable ILM rules determine the physical location of data across
locations and storage tiers
 A single software layer across hardware products ensures consistent
management and flexible deployment options
 Simplified provisioning of storage for applications
- Application owner benefit
o Seamlessly run cloud applications and traditional applications
 Support for industry-standard S3 and CDMI protocols enables cloud
applications to run seamlessly on-premises or in hosted environments
 Compliance and physical location rules are easily executed through an ILM
policy that is triggered by object metadata
 Global access is available to the application data cloud
 Simple development of cloud applications is enabled by using S3 or CDMI
- Purchase decision-maker benefit
o Easily move data to the most cost-effective platform
 As cost models change, organizations can easily adjust and implement
policies that govern object placement across geographically distributed
regions and storage tiers, including disk and tape.
 The StorageGRID Webscale system reduces the complexity and cost of technology refreshes while preserving data durability.
THE STORAGEGRID WEBSCALE SYSTEM: GRID DEPLOYMENT TOPOLOGIES
The StorageGRID Webscale system can be adapted to meet a variety of topologies and use cases,
from a single data center site to multiple geographically distributed data center sites. Generally, the
choice of a deployment topology is based on the unique object replication and protection
requirements of each StorageGRID Webscale system. In a single data center site deployment, the
infrastructure and operations of the StorageGRID Webscale system can be centralized in a single
site. In a deployment with multiple data center sites, the infrastructure of the StorageGRID Webscale
system can be asymmetrical across data center sites and proportional to the needs of each data
center site. Typically, data center sites are located in different geographic locations. Data sharing
and disaster recovery are achieved in a peer-to-peer delivery model by automatically distributing
data to other sites. Each data center site acts as a disaster recovery site for another data center site.
Module 4