
The Google File System
CSC 456 Operating Systems
Seminar Presentation (11/13/2012)
Leon Weingard, Liang Xin
Outline
1. Background
2. Distributed File Systems Overview
3. GFS Architecture
4. Fault Tolerance
Background - Introduction
"The largest cluster to date provides hundreds of terabytes of storage across
thousands of disks on over a thousand machines, and it is concurrently
accessed by hundreds of clients." (The Google File System, SOSP '03)
• Google – search engine.
• Applications process lots of data.
• Need good file system.
• Solution: Google File System (GFS).
Background - Motivation
Goal:
Large, distributed, highly fault tolerant file system
1. Fault-tolerance and auto-recovery need to be built into the system.
2. Standard I/O assumptions (e.g. block size) have to be re-examined.
3. Record appends are the prevalent form of writing.
4. Google applications and GFS should be co-designed.
Distributed File Systems
In computing, a distributed file system is any file system
that allows access to files from multiple hosts over a
computer network.
The performance of a DFS is measured by:
• The amount of time needed to satisfy service requests.
• Transparency: the multiplicity and dispersion of its servers and storage
devices should be invisible to clients.
Network File System (NFS)
Network File System (NFS) is a distributed file system protocol originally developed by
Sun Microsystems in 1984.
• Architecture:
– NFS is a collection of protocols that provide clients with a distributed
file system.
– Remote Access Model (as opposed to the Upload/Download Model; the
sketch after this list contrasts the two)
– Every machine can be both a client and a server.
– Servers export directories for access by remote clients.
– Clients access exported directories by mounting them remotely.
• Protocols:
– mounting
– file and directory access
• Characteristics:
 Workgroup network file service
 Any Unix machine can be a server (easily)
 Machines can be both client & server
 My files on my disk, your files on your disk
 Everybody in group can access all files
 Serious trust, scaling problems
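
The two access models named above differ in where file operations execute.
Below is a minimal, illustrative sketch in Python (all class and method names
are invented for this example, not part of any NFS or AFS API): in the
remote-access model every read is a server round trip, while in the
upload/download model the client fetches the whole file once and then works
on a local copy.

    # Illustrative sketch: remote access vs. upload/download (hypothetical API).

    class RemoteAccessClient:
        """NFS-style: each operation is forwarded to the server."""
        def __init__(self, server):
            self.server = server

        def read(self, path, offset, size):
            # Every read crosses the network; the file stays on the server.
            return self.server.read(path, offset, size)

    class WholeFileCachingClient:
        """AFS-style: fetch the entire file once, then operate locally."""
        def __init__(self, server):
            self.server = server
            self.cache = {}  # path -> locally cached file contents

        def read(self, path, offset, size):
            if path not in self.cache:
                # One whole-file download; later reads hit the local copy.
                self.cache[path] = self.server.fetch_whole_file(path)
            return self.cache[path][offset:offset + size]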
The Andrew File System (AFS)
The Andrew File System was introduced by researchers at Carnegie Mellon
University in the 1980s.
The basic tenet of all versions of AFS is whole-file caching on the local disk
of the client machine that is accessing a file.
Goal: Scalability
Features:
 Uniform name space
 Location-independent file sharing
 Client-side caching with cache consistency
 Secure authentication via Kerberos
 Scalability
 “No write conflict” model only a partial success
All the files in AFS are distributed among the servers. The set of files on one
server is referred to as a volume. If a request cannot be satisfied from this
set of files, the Vice server informs the client where it can find the required file.
Differences between AFS and NFS
NFS                                 AFS
Distributed with OS                 Add-in product
Client-side cache is optional       Client-side caching is standard
Clear-text passwords on the net     Authenticated challenge on the net
Does not scale well                 Scales well
Standard UNIX permissions           Access Control Lists (ACLs)
Not secure                          More secure than NFS
More reliable than AFS?             Less reliable than NFS?
GFS Architecture
In GFS:
A master process maintains the metadata.
A lower layer (i.e. a set of chunkservers)
stores the data in units called “chunks”.
GFS Architecture
What is a master?
A single process running on a separate machine.
• Stores all metadata
• File namespace
• File to chunk mappings
• Chunk location information
• Access control information
• Chunk version numbers
…
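
A minimal sketch of the in-memory structures this list implies. The field and
method names are invented for illustration; GFS's real data structures are
internal to Google.

    # Hypothetical sketch of the master's in-memory metadata.

    class MasterMetadata:
        def __init__(self):
            self.namespace = {}        # file path -> list of chunk handles
            self.chunk_locations = {}  # chunk handle -> chunkservers with a replica
            self.chunk_version = {}    # chunk handle -> version number
            self.acl = {}              # file path -> access control information

        def chunks_for(self, path):
            # File-to-chunk mapping: which chunks make up this file?
            return self.namespace[path]

        def replicas_for(self, handle):
            # Chunk location information; rebuilt from chunkserver reports
            # at startup rather than persisted (see Fault Tolerance).
            return self.chunk_locations[handle]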
GFS Architecture
What is a chunk?
Analogous to block, except larger.
Size: 64 MB!
Stored on chunkserver as file
Chunk handle (~ chunk file name) used to reference chunk.
Chunk replicated across multiple chunkservers
Note: There are hundreds of chunkservers in a
GFS cluster distributed over multiple racks.
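
Because chunks are a fixed 64 MB, translating a byte offset within a file into
a chunk index is simple arithmetic, as this small worked example shows:

    CHUNK_SIZE = 64 * 2**20  # 64 MB

    def chunk_index(offset):
        # Which chunk of the file holds this byte offset?
        return offset // CHUNK_SIZE

    # Byte 200,000,000 falls in chunk 2 (the third chunk), since
    # 2 * CHUNK_SIZE = 134,217,728 <= 200,000,000 < 3 * CHUNK_SIZE = 201,326,592.
    assert chunk_index(200_000_000) == 2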
Read Algorithm
(Diagram slides: the read path from client through master to chunkserver; see
the sketch below.)
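
A sketch of the read path described in the GFS paper, with invented names
(this is not a real client library): the client turns a byte offset into a
chunk index, asks the master for the chunk handle and replica locations,
caches that answer, and then reads directly from a chunkserver, so file data
never flows through the master.

    CHUNK_SIZE = 64 * 2**20  # 64 MB, as above

    def gfs_read(master, cache, filename, offset, size):
        # cache: dict mapping (filename, chunk index) -> (handle, replicas)
        index = offset // CHUNK_SIZE
        if (filename, index) not in cache:
            # 1. One round trip to the master for locations, then cached.
            cache[(filename, index)] = master.lookup(filename, index)
        handle, replicas = cache[(filename, index)]
        # 2. Data is read directly from a chunkserver, not via the master.
        closest = replicas[0]  # a real client picks by network distance
        return closest.read(handle, offset % CHUNK_SIZE, size)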
Primary and Leases
 Master grants a chunk lease to one replica.
 Replica is called the primary.
 Primary picks an order for mutations to the chunk.
 Leases expire in 60 seconds.
 Primary can request an extension if chunk is being mutated.
 Master can revoke a lease before it expires.
 Master can assign a new lease if it loses contact with the primary.
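
A toy model of the lease rules listed above (the 60-second timing is from the
slide; the class and method names are illustrative):

    import time

    LEASE_SECONDS = 60

    class ChunkLease:
        # Toy model of a chunk lease held by the primary replica.
        def __init__(self, primary):
            self.primary = primary
            self.expires = time.monotonic() + LEASE_SECONDS

        def valid(self):
            return time.monotonic() < self.expires

        def extend(self):
            # The primary requests extensions while the chunk is mutated.
            self.expires = time.monotonic() + LEASE_SECONDS

        def revoke(self):
            # The master can cut a lease short before it expires.
            self.expires = 0.0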
Write Algorithm
(Diagram slides: the data-push and primary-ordered commit steps of a write;
see the sketch below.)
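
A sketch of the write path from the GFS paper, again with invented names: the
client first pushes the data to every replica (data flow is decoupled from
control flow and pipelined between chunkservers), then asks the primary to
commit; the primary assigns the mutation a serial order and forwards that
order to the secondaries.

    def gfs_write(master, filename, index, data):
        # 1. Ask the master which replica holds the lease (the primary).
        primary, secondaries = master.lease_holder(filename, index)
        # 2. Push data to every replica's buffer; in GFS this is pipelined
        #    from one chunkserver to the next rather than sent N times.
        for replica in [primary] + secondaries:
            replica.push_data(data)
        # 3. Send the write request to the primary. The primary assigns a
        #    serial order, applies the mutation, forwards the order to the
        #    secondaries, collects their acks, and replies to the client.
        return primary.commit_write(secondaries)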
Fault Tolerance: Chunks
Fast Recovery: master and chunkservers are designed to restart
and restore state in a few seconds.
No persistent log of chunk location in the master.
Syncing chunkservers and master is unnecessary under this model.
Alternate view: each chunkserver has the final say on which chunks it holds
on its disk.
Chunk Replication: across multiple machines, across multiple
racks.
Fault Tolerance: Master
Data structures are kept in memory, must be able to recover from
system failure.
Operation Log: Log of all changes made to metadata.
Log records are batched before being flushed to disk.
Checkpoints of state are taken when the log grows too big.
Log and latest checkpoint used to recover state.
Log and checkpoints replicated on multiple machines.
Master state is replicated on multiple machines.
“Shadow” masters for reading data if “real” master is down.
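
A minimal sketch of this recovery scheme (file format, batch size, and names
invented for illustration): metadata mutations are appended to an operation
log, flushed in batches, and recovery replays the log on top of the latest
checkpoint.

    import json

    class OperationLog:
        # Toy model: batch metadata mutations, then flush them to disk.
        def __init__(self, path="oplog"):
            self.path = path
            self.batch = []  # records are batched before being flushed

        def append(self, record):
            self.batch.append(record)
            if len(self.batch) >= 64:  # illustrative batch threshold
                self.flush()

        def flush(self):
            # GFS also replicates the log to remote machines before
            # acknowledging the client.
            with open(self.path, "a") as f:
                for record in self.batch:
                    f.write(json.dumps(record) + "\n")
            self.batch.clear()

    def recover(load_checkpoint, apply_record, log_path="oplog"):
        # Rebuild master state: load the latest checkpoint, then replay
        # the log records written after that checkpoint.
        state = load_checkpoint()
        with open(log_path) as f:
            for line in f:
                apply_record(state, json.loads(line))
        return state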
Data integrity:
Each chunk has an associated checksum.
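
A small sketch of checksum verification. Note that real GFS checksums 64 KB
blocks within each chunk rather than whole chunks; this simplified version
checksums a whole chunk and uses CRC32 as a stand-in.

    import zlib

    def store_chunk(data):
        # Keep a checksum alongside the chunk when it is written.
        return data, zlib.crc32(data)

    def read_chunk(data, stored_checksum):
        # Verify on every read; a mismatch means this replica is corrupt
        # and the data should be re-read from another replica.
        if zlib.crc32(data) != stored_checksum:
            raise IOError("chunk corrupt: checksum mismatch")
        return data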
Summary
Distributed File Systems
• NFS
• AFS
Differences between AFS and NFS
GFS Architecture
Single master with metadata and many chunkservers.
GFS is designed to be fault tolerant: the master must be able to recover its
state, and chunk locations are not persisted.
Most operations are reads and appends, so a record-append operation was added
to make appends efficient.
 (These slides are adapted from slides by Alex Moshchuk, University of
Washington, used during the Google lecture series.)
 References
 1. The Google File System, SOSP '03.
 2. GFS presentation slides from SOSP '03.
 3. www.delmarlearning.com/companions/content/.../Ch18.ppt
 4. lyle.smu.edu/cse/8343/fall_2003/.../DFS_Presentation.ppt
Thank you!