XtremIO Data Protection (XDP) Explained

XDP Benefits
 Delivers the best traits of traditional RAID with none of its drawbacks
 Ultra-low 8% fixed capacity overhead
 No RAID levels, stripe sizes, chunk sizes, etc.
 High levels of data protection
– Sustains up to two simultaneous failures per DAE*
– Multiple consecutive failures (with adequate free capacity)
 “Hot Space” - spare capacity is distributed (no hot spares)
 Rapid rebuild times
 Superior flash endurance
 Predictable, consistent, sub-millisecond performance
*v2.2 encodes data for N+2 redundancy and supports a single rebuild per DAE. A future XIOS release will add double concurrent rebuild support.
XDP Stripe – Logical View
[Diagram: simplified XDP stripe – 7 data columns (C1–C7) x 6 data rows, plus 2 parity columns (P and Q)]
 The following slides show a simplified example of XDP. In reality, XDP uses a (23+2) x 28 stripe.
 P is a column that contains parity per row.
 Q is a column that contains parity per diagonal.
 Every block in the XDP stripe is 4KB in size.
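To make the roles of P and Q concrete, here is a minimal Python sketch of parity construction for the simplified 7 x 6 stripe above. It assumes plain XOR parity and a (row + col) mod 7 diagonal layout for Q; both are illustrative assumptions for this example, not XtremIO's actual encoding.

```python
# Minimal sketch of XDP-style parity over a simplified (7 data columns) x
# (6 rows) stripe of 4KB blocks.  The diagonal definition ((row + col) mod 7)
# is an illustrative assumption, not necessarily XtremIO's exact layout.
import os

BLOCK = 4096          # every block in the stripe is 4KB
COLS, ROWS = 7, 6     # simplified stripe; the real stripe is (23+2) x 28

def xor_blocks(blocks):
    """XOR a list of equal-sized blocks together."""
    out = bytearray(BLOCK)
    for b in blocks:
        for i in range(BLOCK):
            out[i] ^= b[i]
    return bytes(out)

# D[row][col] holds the 4KB data blocks of one stripe
D = [[os.urandom(BLOCK) for _ in range(COLS)] for _ in range(ROWS)]

# P column: one parity block per row (XOR across the row)
P = [xor_blocks(D[r]) for r in range(ROWS)]

# Q column: one parity block per diagonal (XOR along each diagonal)
Q = [xor_blocks([D[r][c] for r in range(ROWS) for c in range(COLS)
                 if (r + c) % COLS == d])
     for d in range(COLS)]

assert len(P) == ROWS and len(Q) == COLS   # 6 row parities, 7 diagonal parities
```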
Physical View
[Diagram: one stripe's columns C1–C7, P, and Q placed across the SSDs]
 Although each column is represented in this diagram as a single logical block, the system has the ability to read or write at a granularity of 4KB or less.
 A stripe's columns are randomly distributed across the SSDs to avoid hot spots and congestion.
 Each SSD contains the same number of P and Q columns.
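A small sketch of the placement idea, under the simplifying assumption of one column per SSD and a uniformly random permutation per stripe (the names and the random policy are illustrative, not XtremIO's placement logic): over many stripes, each SSD ends up holding roughly the same number of P and Q columns.

```python
# Illustrative sketch: each stripe's columns (C1..C7, P, Q) are assigned to
# SSDs in a random permutation, so parity columns do not concentrate on
# particular drives.
import random
from collections import Counter

COLUMNS = [f"C{i}" for i in range(1, 8)] + ["P", "Q"]
NUM_SSDS = len(COLUMNS)          # one column per SSD in this simplified view

def place_stripe(rng):
    """Return a dict mapping each column of one stripe to an SSD index."""
    ssds = list(range(NUM_SSDS))
    rng.shuffle(ssds)
    return dict(zip(COLUMNS, ssds))

rng = random.Random(0)
parity_per_ssd = Counter()
for _ in range(10_000):                     # many stripes
    layout = place_stripe(rng)
    parity_per_ssd[layout["P"]] += 1
    parity_per_ssd[layout["Q"]] += 1

# Over many stripes, each SSD holds roughly the same number of P and Q columns.
print(parity_per_ssd)
```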
SSD Failure
[Diagram: rebuilding failed column C1 from columns C2–C7 and parity columns P and Q; the legend distinguishes reads from SSD from blocks already in Controller Memory, and running counters track the number of reads and writes per column]
 If the SSD where C1 is stored has failed, let's see how XDP efficiently recovers C1's data.
 XDP always reads the first two rows of the stripe and recovers their C1 blocks using the row parity P.
 Next, the system reads the rest of the diagonal data (columns C5, C6 and C7) and the stored diagonal parity Q.
 The remaining C1 blocks are recovered using the Q parity together with the blocks (C2, C3, and others) that were previously read and are already in Storage Controller memory.
 This expedited recovery process completes with fewer reads and minimal compute cycles, increasing rebuild performance.
 XDP minimizes the reads required to recover the data by 25% (30 vs. 42) compared with traditional RAID.
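The read counts above can be reproduced with a short sketch for the simplified 7 x 6 stripe, again assuming a (row + col) mod 7 diagonal layout for Q (an illustrative assumption). It only counts block reads: two rows are rebuilt from row parity P, the rest from diagonal parity Q while reusing blocks already in controller memory, giving 30 reads instead of the 42 needed if every row were rebuilt from P alone.

```python
# Count the SSD reads needed to rebuild failed column C1 (index 0) in the
# simplified 7x6 stripe, reusing blocks already in controller memory.
COLS, ROWS, FAILED = 7, 6, 0

reads = set()                         # unique blocks fetched from SSD

def fetch(block_id):
    """Read a block from SSD unless it is already in controller memory."""
    reads.add(block_id)

# Step 1: recover the first two rows of C1 with row parity P
# (read the 6 surviving data blocks plus P in each row).
for r in (0, 1):
    for c in range(COLS):
        if c != FAILED:
            fetch(("D", r, c))
    fetch(("P", r))

# Step 2: recover the remaining rows from diagonal parity Q,
# reusing blocks that step 1 already brought into memory.
for r in range(2, ROWS):
    d = (r + FAILED) % COLS           # diagonal through the lost block (r, FAILED)
    fetch(("Q", d))
    for row in range(ROWS):
        col = (d - row) % COLS        # the member of diagonal d in this row
        if (row, col) != (r, FAILED):
            fetch(("D", row, col))    # cached blocks are not re-read

print(len(reads))                     # 30 reads, vs. 6 x 7 = 42 with row parity alone
```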
XDP Rebuilds & Hot Space
Allows SSDs to fail-in-place
 Rapid rebuilds
 No performance impact after rebuild completes for up to five failed
SSDs per X-Brick
[Charts: ~330K IOPS sustained with 3, 4, and 5 failed SSDs]
Stripe update at 80% utilization
• Example shows an array that is 80% full
• Diagram shows new I/Os overwriting addresses with existing data – there is no net increase in capacity (space consumed frees up in other stripes)
• The system ranks stripes according to utilization level
• At least one stripe is guaranteed to be 40% empty => hosts benefit from the performance of a 40% empty array vs. a 20% empty array
• Always writes to the stripe that is most free
• Writes to SSD as soon as enough blocks arrive to fill the entire emptiest stripe in the system (in this example 17 blocks are required)
• Stripes are then re-ranked according to % of free blocks
• Subsequent updates are performed using this algorithm (a sketch follows the ranking tables below)
[Tables: stripes S1–S9 listed with their % free blocks (0%, 20%, or 40%), ranked by free space before the update and re-ranked after it]
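A rough sketch of this flow, with invented names (StripeRanker is not an XtremIO construct): stripes are ranked by free 4KB slots, incoming blocks are buffered, and one full stripe's worth of blocks is flushed to the emptiest stripe, which is then re-ranked.

```python
# Illustrative sketch of the update algorithm described above; the class
# and method names are invented for this example, not XtremIO internals.

class StripeRanker:
    def __init__(self, free_slots):
        self.free = dict(free_slots)   # stripe name -> free 4KB block slots
        self.pending = []              # buffered incoming 4KB blocks

    def emptiest(self):
        """Always write to the stripe that is most free."""
        return max(self.free, key=self.free.get)

    def overwrite(self, stripe, count):
        """Overwriting existing data frees slots in the stripe that held it."""
        self.free[stripe] += count

    def write(self, blocks):
        """Buffer new blocks; flush once the emptiest stripe can be filled."""
        self.pending.extend(blocks)
        target = self.emptiest()
        needed = self.free[target]     # e.g. 17 blocks for a 40%-empty stripe
        if needed and len(self.pending) >= needed:
            batch, self.pending = self.pending[:needed], self.pending[needed:]
            self.free[target] = 0      # stripe is now full; ranking is updated
            return target, batch       # written to SSD as one full-stripe update
        return None

# 80%-full example: the emptiest stripes are about 40% empty.
ranker = StripeRanker({"S1": 8, "S2": 0, "S3": 17, "S7": 17, "S9": 0})
ranker.overwrite("S2", 4)              # overwritten addresses free space elsewhere
for i in range(40):
    result = ranker.write([f"blk{i}"])
    if result:
        print(f"flushed {len(result[1])} new blocks to stripe {result[0]}")
```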
XDP Stripe - the Real Numbers
Number of 4KB data blocks in a stripe     28 x 23 = 644
Amount of data in a stripe                4KB x 644 = 2,576KB
Number of parity blocks in a stripe       28 + 29 = 57
Total number of blocks in a stripe        644 + 57 = 701
Total number of stripes in one X-Brick    7.5TB per X-Brick / 2,576KB ≈ 3M stripes

RAID overhead (of P, Q) = 57/701 ≈ 8%

[Diagram: 23 data columns plus P and Q parity columns x 28 data rows, spread across 25 SSDs]
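A quick arithmetic check of the table above (1TB treated as 10^9 KB, so the stripe count is approximate):

```python
# Recompute the stripe numbers above (sizes in KB, 1TB taken as 10**9 KB).
data_blocks   = 28 * 23                       # 644 x 4KB data blocks per stripe
data_kb       = 4 * data_blocks               # 2,576KB of data per stripe
parity_blocks = 28 + 29                       # 28 P blocks + 29 Q blocks
total_blocks  = data_blocks + parity_blocks   # 701 blocks per stripe
overhead_pct  = 100 * parity_blocks / total_blocks   # ~8% fixed overhead
stripes       = 7.5e9 / data_kb               # 7.5TB per X-Brick / 2,576KB ~= 3M
print(data_blocks, data_kb, total_blocks, round(overhead_pct, 1), round(stripes))
```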
Update Overhead Compared
RAID Scheme         Reads per Update   Writes per Update   Capacity Overhead
RAID-5              2                  2                   N+1
RAID-6              3                  3                   N+2
RAID-1              0                  2                   N×2
XtremIO (at 80%)    1.22               1.22                N+2
XtremIO (at 90%)    1.44               1.44                N+2