Dynamo: Amazon's Highly
Available Key-value Store
Giuseppe DeCandia, Deniz Hastorun,
Madan Jampani, Gunavardhan Kakulapati,
Avinash Lakshman, Alex Pilchin,
Swami Sivasubramanian, Peter Vosshall,
and Werner Vogels
Cloud Data Services & Relational
Database Systems go hand in hand




Oracle, Microsoft SQL Server, and even MySQL have traditionally
powered enterprise and online data clouds
Clustered – Traditional enterprise RDBMSs can cluster and replicate
data over multiple servers, providing reliability
Highly Available – Synchronization (“Always Consistent”),
load-balancing, and high-availability features provide nearly
100% service uptime
Structured Querying – Complex data models and structured querying
make it possible to off-load much of the data processing and
manipulation to the back-end database

Traditional RDBMS clouds are:
EXPENSIVE!
To maintain, license and store large amounts of data




The service guarantees of traditional enterprise relational
databases like Oracle put high overheads on the cloud
Complex data models make the cloud more expensive to maintain,
update, and keep synchronized
Load distribution often requires expensive networking equipment
Maintaining the “elasticity” of the cloud often requires expensive
upgrades to the network
The Solution

Downgrade some of the service guarantees of traditional
RDBMS

Replace the highly complex data models Oracle and SQL Server offer with a simpler one – this
means classifying services based on the complexity of the data model they require

Replace the “Always Consistent” synchronization guarantee with an “Eventually
Consistent” model – this means classifying services based on how up-to-date their data sets must
be

Redesign or distinguish between services that require a simpler data model
and lower expectations on consistency.
We could then offer something different from traditional RDBMS!
The Solution

Amazon’s Dynamo – Used across Amazon’s cloud services. Powers its Simple
Storage Service (S3) as well as its e-commerce platform
Offers a simple primary-key based data model. Stores vast amounts of information on distributed, low-cost virtualized nodes

Google’s BigTable – Google’s principal data cloud for its services – Uses a more
complex column-family data model than Dynamo, yet much simpler than a traditional RDBMS
Google’s underlying file system provides the distributed architecture on low-cost nodes

Facebook’s Cassandra – Facebook’s principal data cloud for its services.
This project was recently open-sourced. Provides a data model similar to Google’s BigTable, but with the
distributed characteristics of Amazon’s Dynamo
What is Dynamo?

Dynamo is a highly available distributed key-value storage system

put(), get() interface

Sacrifices consistency for availability

Provides storage for some of Amazon's key
products (e.g., shopping carts, best seller lists, etc.)

Uses “synthesis of well known techniques to
achieve scalability and availability”

Consistent hashing, object versioning, conflict resolution,
etc.
Scale


Amazon is busy during the holidays

Shopping cart: tens of millions of requests for 3
million checkouts in a single day

Session state system: 100,000s of concurrently
active sessions
Failure is common

Small but significant number of server and network
failures at all times

“Customers should be able to view and add items to their shopping cart
even if disks are failing, network routes are flapping, or data centers are
being destroyed by tornados.”
Flexibility



Minimal need for manual administration
Nodes can be added or removed without
manual partitioning or redistribution
Apps can control availability, consistency, cost-effectiveness, performance

Can developers know this up front?

Can it be changed over time?
System Assumptions & Requirements

Query Model: simple read and write operations to a data item
that is uniquely identified by a key.


values are small (<1MB) binary objects
No ACID Properties: Atomicity, Consistency, Isolation,
Durability.

Efficiency: latency requirements which are in general measured
at the 99.9th percentile of the distribution.

Other Assumptions: operation environment is assumed to
be non-hostile and there are no security related requirements such as
authentication and authorization.
Service level agreements

SLAs are used widely at Amazon

Sub-services must meet strict SLAs


e.g., 300ms response time for 99.9%
of requests at peak load of 500
requests/s

Average-case SLAs are not good
enough

Mentioned a cost-benefit analysis that
said 99.9% is the right number
Rendering a single page can make
requests to 150 services
[Figure: service-oriented architecture of Amazon’s platform]
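As a rough illustration of what such a target involves, here is a minimal Java sketch (Java being the language Dynamo is written in) that sorts a sample of request latencies and checks the 99.9th percentile against the 300ms bound; the class and method names are hypothetical, not from the paper.

```java
import java.util.Collections;
import java.util.List;

// Illustrative sketch: check a latency sample against a 99.9th-percentile SLA.
final class SlaCheck {
    // Returns the latency at the given percentile (e.g., 99.9) of the sample.
    static long percentile(List<Long> latenciesMs, double pct) {
        Collections.sort(latenciesMs);
        int idx = (int) Math.ceil(pct / 100.0 * latenciesMs.size()) - 1;
        return latenciesMs.get(Math.max(idx, 0));
    }

    // True if 99.9% of requests completed within the 300ms target.
    static boolean meetsSla(List<Long> latenciesMs) {
        return percentile(latenciesMs, 99.9) <= 300;
    }
}
```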
Sacrifice strong consistency for availability

Eventual consistency

“Always writable”


Can always write to shopping cart

Pushes conflict resolution to reads
Application-driven conflict resolution

e.g., merge conflicting shopping carts

Or Dynamo enforces last-writer-wins

How often does this work?
Other Design Principles

Incremental scalability – minimal management overhead

Symmetry – no master/slave nodes

Decentralized – centralized control leads to too many failures

Heterogeneity – exploit capabilities of different nodes
Summary of techniques used in Dynamo
and their advantages

Problem | Technique | Advantage
Partitioning | Consistent hashing | Incremental scalability
High availability for writes | Vector clocks with reconciliation during reads | Version size is decoupled from update rates
Handling temporary failures | Sloppy quorum and hinted handoff | Provides high availability and durability when some replicas are unavailable
Recovering from permanent failures | Anti-entropy using Merkle trees | Synchronizes divergent replicas in the background
Membership and failure detection | Gossip-based membership protocol and failure detection | Preserves symmetry and avoids a centralized registry for membership and node liveness information
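The table only names the anti-entropy technique; as an illustration of the idea, here is a minimal sketch of the Merkle-tree comparison it relies on, in which two replicas compare hashes top-down and recurse only into subtrees that differ, so the work done is proportional to the divergence rather than to the size of the key range. All names here are illustrative, not Dynamo’s actual code.

```java
import java.util.List;

// Hypothetical sketch of anti-entropy with Merkle trees: two replicas compare
// root hashes and recurse only into subtrees whose hashes differ.
final class MerkleNode {
    final String hash;            // hash over this node's key range
    final MerkleNode left, right; // null for leaf nodes

    MerkleNode(String hash, MerkleNode left, MerkleNode right) {
        this.hash = hash; this.left = left; this.right = right;
    }

    // Collects the leaf ranges (represented here by their hashes) that
    // disagree between the two replicas and therefore need repair.
    static void diff(MerkleNode a, MerkleNode b, List<String> outOfSync) {
        if (a.hash.equals(b.hash)) return;      // identical subtree: skip entirely
        if (a.left == null || b.left == null) { // reached a leaf: range differs
            outOfSync.add(a.hash + " vs " + b.hash);
            return;
        }
        diff(a.left, b.left, outOfSync);
        diff(a.right, b.right, outOfSync);
    }
}
```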
Interface

get(key) returns object replica(s) for key, plus a
context object


context encodes metadata, opaque to caller
put(key, context, object) stores object
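A minimal sketch of what this interface could look like in Java (the slides confirm the language but not these exact signatures; the type names are assumptions):

```java
import java.util.List;

// Illustrative sketch of Dynamo's client-facing interface.
interface Dynamo {
    // Returns the object version(s) found for the key, plus an opaque
    // context carrying metadata such as the versions' vector clocks.
    GetResult get(byte[] key);

    // Stores the object; the context from a prior get() tells Dynamo
    // which version(s) this write supersedes.
    void put(byte[] key, Context context, byte[] object);
}

final class GetResult {
    final List<byte[]> objects; // one entry per divergent version
    final Context context;      // opaque to the caller
    GetResult(List<byte[]> objects, Context context) {
        this.objects = objects; this.context = context;
    }
}

final class Context { /* encodes vector clocks; opaque to the caller */ }
```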
Variant of consistent hashing

[Ring diagram: nodes A–G placed around the hash ring; key K hashes to a position on the ring]
Each node is assigned to multiple points in the ring (e.g., B, C, and D store key range (A, B])
The number of points can be assigned based on a node’s capacity
If a node becomes unavailable, its load is distributed to the others
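A minimal Java sketch of the idea, assuming a sorted map as the ring and a stand-in hash function (Dynamo itself uses MD5): each physical node is hashed to several positions, and a key is served by the first node clockwise from the key’s position.

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch of consistent hashing with virtual nodes: each physical
// node is hashed onto several points of the ring, and a key is served by
// the first node clockwise from the key's position.
final class Ring {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    // Assign more points to higher-capacity nodes.
    void addNode(String node, int points) {
        for (int i = 0; i < points; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    String nodeFor(byte[] key) {
        Map.Entry<Long, String> e = ring.ceilingEntry(hash(new String(key)));
        // Wrap around to the first point if the key hashes past the last one.
        return e != null ? e.getValue() : ring.firstEntry().getValue();
    }

    private long hash(String s) {
        return s.hashCode() & 0xffffffffL; // stand-in for MD5, which Dynamo uses
    }
}
```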
Replication

[Ring diagram: B is the coordinator for key K; replicas of K are stored on B, C, and D]
B maintains a preference list for each data item specifying the nodes storing that item
The preference list skips virtual nodes in favor of distinct physical nodes
D stores the key ranges (A, B], (B, C], and (C, D]
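A hedged sketch of how such a preference list might be assembled: walk the ring clockwise from the key’s position and keep the first N distinct physical hosts, skipping virtual nodes that belong to a host already chosen. The helper below is illustrative, not Dynamo’s actual code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of preference-list construction.
final class PreferenceList {
    // clockwiseOwners: the physical host behind each successive virtual
    // node, starting at the key's position on the ring.
    static List<String> build(List<String> clockwiseOwners, int n) {
        List<String> prefs = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        for (String host : clockwiseOwners) {
            if (seen.add(host)) prefs.add(host); // skip repeats of the same host
            if (prefs.size() == n) break;        // stop at N distinct hosts
        }
        return prefs;
    }
}
```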
Data versioning

put() can return before update is applied to all replicas

Subsequent get()s can return older versions


This is okay for shopping carts

Branched versions are collapsed

Deleted items can resurface
A vector clock is associated with each object version

Comparing vector clocks can determine whether two
versions are parallel branches or causally ordered

Vector clocks passed by the context object in get()/put()

Application must maintain this metadata?
Vector Clock



A vector clock is a list of (node, counter) pairs.
Every version of every object is associated with
one vector clock.
If every counter in the first object’s clock is
less than or equal to the corresponding counter in the
second clock, then the first is an ancestor of the
second and can be forgotten.
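A minimal sketch of this ancestor test in Java (the field and method names are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a vector clock as a list of (node, counter) pairs.
final class VectorClock {
    final Map<String, Long> counters = new HashMap<>(); // node -> counter

    // True if this clock dominates the other, i.e., every counter in
    // 'other' is <= the matching counter here. 'other' is then an ancestor.
    boolean descendsFrom(VectorClock other) {
        for (Map.Entry<String, Long> e : other.counters.entrySet()) {
            long mine = counters.getOrDefault(e.getKey(), 0L);
            if (mine < e.getValue()) return false;
        }
        return true;
    }

    // Bump this node's counter when it coordinates a write.
    void increment(String node) {
        counters.merge(node, 1L, Long::sum);
    }
}
```

If a.descendsFrom(b) holds but not the reverse, b is an ancestor of a and can be forgotten; if neither direction holds, the two versions are parallel branches that need reconciliation.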
Vector clock example
“Quorum-likeness”

get() & put() driven by two parameters:

R: the minimum number of replicas to read

W: the minimum number of replicas to write

R + W > N yields a “quorum-like” system

Latency is dictated by the slowest R (or W) replicas

Sloppy quorum to tolerate failures

Replicas can be stored on healthy nodes downstream in the
ring, with metadata specifying that the replica should be sent
to the intended recipient later
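The arithmetic behind the guarantee: any set of R replicas read and any set of W replicas written must overlap in at least one replica whenever R + W > N. A tiny sketch, using the (3, 2, 2) configuration the slides mention later:

```java
// Sketch of the quorum arithmetic: a read of R replicas and a write of W
// replicas share at least one replica whenever R + W > N.
final class QuorumConfig {
    final int n, r, w;

    QuorumConfig(int n, int r, int w) {
        if (r + w <= n) {
            throw new IllegalArgumentException("need R + W > N for quorum-like behavior");
        }
        this.n = n; this.r = r; this.w = w;
    }

    // The common Amazon configuration mentioned later in these slides.
    static QuorumConfig common() { return new QuorumConfig(3, 2, 2); }
}
```

With (3, 2, 2), any two replicas read and any two written share at least one node, which is what lets a read observe the latest acknowledged write.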
Adding and removing nodes


Explicit commands issued via CLI or browser
Gossip-style protocol propagates changes
among nodes

New node chooses virtual nodes in the hash space
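A hypothetical sketch of the gossip step, assuming each node keeps a version number for the latest membership change it has seen and periodically reconciles its view with a random peer; the names here are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of gossip-style membership: each node keeps a locally
// versioned view and periodically reconciles it with one random peer, so an
// explicit join/leave command eventually reaches every node.
final class Membership {
    // node name -> version of the latest membership change known for it
    final Map<String, Long> view = new ConcurrentHashMap<>();

    // Merge a peer's view, keeping the newer entry for each node.
    void reconcile(Map<String, Long> peerView) {
        peerView.forEach((node, version) -> view.merge(node, version, Math::max));
    }
}
```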
Implementation

The persistent store is either Berkeley DB
Transactional Data Store, BDB Java Edition,
MySQL, or an in-memory buffer with a persistent
backend

All in Java!

Common N, R, W setting is (3, 2, 2)

Results are from several hundred nodes
configured as (3, 2, 2)

Not clear whether they run in a single datacenter…
[Graphs from the paper at tick spacings of 12 hours, 1 hour, and 30 minutes]
During periods of high load, popular objects dominate
During periods of low load, fewer popular objects are accessed
Quantifying divergent versions


In a 24-hour trace

99.94% of requests saw exactly one version

0.00057% received 2 versions

0.00047% received 3 versions

0.00009% received 4 versions
Experience showed that divergence usually came
from concurrent writes by automated client
programs (robots), not humans
Conclusions

Scalable:
Easy to shovel in more capacity at Christmas

Simple:
get()/put() maps well to Amazon’s workload

Flexible:
Apps can set N, R, W to match their needs

Inflexible:
Apps have to set N, R, W to match their needs
Apps may have to do their own conflict resolution

They claim it’s easy to set these – does this mean that there aren’t many
interesting points?
Interesting?