M. Goenka - Center for Software Engineering

The Hadoop Distributed File System, by Dhruba Borthakur
and Related Work
Presented by Mohit Goenka
The Hadoop Distributed File
System: Architecture and Design
Requirement
• Need to process Multi Petabyte
Datasets
• Expensive to build reliability in each
application.
• Nodes fail every day
• Need common infrastructure
Introduction
• HDFS, the Hadoop Distributed File System, is designed to run on commodity hardware
• Built by brilliant engineers and contributors from Yahoo!, Facebook, Cloudera, and other companies
• Has grown into a very large project at Apache with a significant ecosystem
Commodity Hardware
• Typically a 2-level architecture
– Nodes are commodity PCs
– 30-40 nodes/rack
– Uplink from rack is 3-4 gigabit
– Rack-internal is 1 gigabit
Goals
• Very Large Distributed File System
– 10K nodes, 100 million files, 10 PB
• Assumes Commodity Hardware
– Files are replicated to handle hardware failure
– Detect failures and recover from them
• Optimized for Batch Processing
– Data locations exposed so that
computations can move to where data
resides
– Provides very high aggregate bandwidth
• User Space, runs on heterogeneous OS
HDFS Basic Architecture
[Diagram: Cluster Membership — Client, NameNode, Secondary NameNode, DataNodes]
NameNode: Maps a file to a file-id and a list of DataNodes
DataNode: Maps a block-id to a physical location on disk
SecondaryNameNode: Periodic merge of the Transaction Log
Distributed File System
• Single Namespace for entire cluster
• Data Coherency
– Write-once-read-many access model
– Client can only append to existing files
• Files are broken up into blocks
– Typically 128 MB block size
– Each block replicated on multiple
DataNodes
• Intelligent Client
– Client can find location of blocks
– Client accesses data directly from
DataNode
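Since the client looks up block locations itself, here is a minimal sketch of that lookup against the public HDFS Java API (cited in the Sources); the path /user/demo/sample.txt is hypothetical:

  import java.util.Arrays;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BlockLocation;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BlockLocationExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();        // reads core-site.xml / hdfs-site.xml from the classpath
      FileSystem fs = FileSystem.get(conf);
      Path file = new Path("/user/demo/sample.txt");   // hypothetical file
      FileStatus status = fs.getFileStatus(file);
      // One BlockLocation per block, listing the DataNodes that hold a replica
      for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
        System.out.println("offset " + block.getOffset()
            + " -> hosts " + Arrays.toString(block.getHosts()));
      }
      fs.close();
    }
  }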
HDFS Core Architecture
NameNode Metadata
• Meta-data in Memory
– The entire metadata is in main memory
– No demand paging of meta-data
• Types of Metadata
– List of files
– List of Blocks for each file
– List of DataNodes for each block
– File attributes, e.g. creation time, replication factor
• A Transaction Log
– Records file creations, file deletions, etc.
Data Node
• A Block Server
– Stores data in the local file system (e.g.
ext3)
– Stores meta-data of a block (e.g. CRC)
– Serves data and meta-data to Clients
• Block Report
– Periodically sends a report of all existing
blocks to the NameNode
• Facilitates Pipelining of Data
– Forwards data to other specified
DataNodes
Block Placement
• Current Strategy
- One replica on local node
- Second replica on a remote rack
- Third replica on same remote rack
- Additional replicas are randomly placed
• Clients read from nearest replica
• Would like to make this policy
pluggable
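One way to see where the replicas of a given file actually landed is the fsck tool, which can report blocks and racks (the path here is hypothetical):
- ./bin/hadoop fsck /user/demo/sample.txt -files -blocks -racks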
Data Correctness
• Use Checksums to validate data
– Use CRC32
• File Creation
– Client computes a checksum per 512 bytes
– DataNode stores the checksum
• File access
– Client retrieves the data and checksum
from DataNode
– If Validation fails, Client tries other
replicas
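A small illustrative sketch (assumed, not from the slides) of the per-chunk checksum idea: compute a CRC32 for every 512-byte chunk of a buffer, the way a writing client would before handing data to a DataNode. The buffer contents are placeholders.

  import java.util.zip.CRC32;

  public class ChunkChecksums {
    public static void main(String[] args) {
      byte[] data = new byte[2048];      // stand-in for file data being written
      int chunkSize = 512;               // HDFS checksums data in 512-byte chunks
      for (int offset = 0; offset < data.length; offset += chunkSize) {
        int len = Math.min(chunkSize, data.length - offset);
        CRC32 crc = new CRC32();
        crc.update(data, offset, len);   // checksum for this chunk only
        System.out.println("chunk at " + offset + " -> CRC32 " + Long.toHexString(crc.getValue()));
      }
    }
  }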
NameNode Failure
• A single point of failure
• Transaction Log stored in multiple
directories
- A directory on the local file system
- A directory on a remote file system
(NFS/CIFS)
• Need to develop a real HA solution
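A hedged configuration sketch of the multiple-directory Transaction Log setup above: dfs.name.dir (dfs.namenode.name.dir in later Hadoop releases) accepts a comma-separated list of directories, typically one local and one on an NFS mount; the paths below are made up.

  <property>
    <name>dfs.name.dir</name>
    <!-- The NameNode writes its image and edit log to every listed directory -->
    <value>/local/hadoop/name,/mnt/nfs/hadoop/name</value>
  </property>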
Data Pipelining
• Client retrieves a list of DataNodes
on which to place replicas of a block
• Client writes block to the first DataNode
• The first DataNode forwards the
data to the next DataNode in the
Pipeline
• When all replicas are written, the
Client moves on to write the next
block in file
Rebalancer
• Goal: % disk full on DataNodes should
be similar
– Usually run when new DataNodes are added
– Cluster is online when Rebalancer is
active
– Rebalancer is throttled to avoid
network congestion
– Command line tool
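Because the Rebalancer is a command-line tool, a typical invocation looks like the following (the 10% utilization threshold is only an example):
- ./bin/hadoop balancer -threshold 10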
Hadoop Map / Reduce
• The Map-Reduce programming model
– Framework for distributed processing of
large data sets
– Pluggable user code runs in a generic framework
• Common design pattern in data
processing
cat * | grep | sort | uniq -c | cat > file
input | map | shuffle | reduce | output
• Natural for:
– Log processing
– Web search indexing
– Ad-hoc queries
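As a concrete illustration of the input | map | shuffle | reduce | output pattern, here is a sketch along the lines of the standard Hadoop WordCount tutorial (not code from these slides):

  import java.io.IOException;
  import java.util.StringTokenizer;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCount {
    // map: emit (word, 1) for every token in the input line
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
      private final static IntWritable ONE = new IntWritable(1);
      private Text word = new Text();
      public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          context.write(word, ONE);
        }
      }
    }
    // reduce: sum the counts for each word after the shuffle
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) sum += v.get();
        context.write(key, new IntWritable(sum));
      }
    }
    public static void main(String[] args) throws Exception {
      Job job = Job.getInstance(new Configuration(), "word count");
      job.setJarByClass(WordCount.class);
      job.setMapperClass(TokenizerMapper.class);
      job.setCombinerClass(IntSumReducer.class);
      job.setReducerClass(IntSumReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }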
Data Flow
[Diagram: Data flow across Web Servers, Scribe Servers, Network Storage, the Hadoop Cluster, Oracle RAC, and MySQL]
Basic Operations
• Listing files
- ./bin/hadoop fs -ls
• Writing files
- ./bin/hadoop fs -put <localsrc> <dst>
• Running Map Reduce Jobs
- mkdir input
- cp conf/*.xml input
- cat output/*
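The step that actually launches the example job is the bundled examples jar, roughly as in the Hadoop quickstart (the jar name varies by release, so treat it as illustrative):
- ./bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'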
Hadoop Ecosystem Projects
• HBase
- Big Table
• Hive
- Built at Facebook, provides a SQL-like interface
• Chukwa
- Log Processing
• Pig
- Scientific data analysis language
• Zookeeper
- Distributed Systems management
Limitations
• The system is built for gigabytes to terabytes of data and can only be scaled down to a limited threshold
• Because this threshold is very high, the system is limited in many ways
• This hampers the efficiency of the system during large computations or parallel data exchange
JSON Interface to Control HDFS
An Open Source Project
by Mohit Goenka
JSON
• JSON (JavaScript Object Notation)
is a lightweight data-interchange
format
• Can be easily read and written by humans
• Can be easily parsed by machines
• Written in text format
• Uses conventions similar to existing programming languages
JSON Data
• It is based on two structures:
- A collection of name/value pairs
- An ordered list of values
• Concept: Use the lightweight nature of JSON data to automate command execution on the HDFS interface
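A purely hypothetical example of such a JSON file, pairing the commands to execute with the data to store (the field names are invented for illustration):

  {
    "commands": ["-mkdir /user/demo", "-put local.txt /user/demo/local.txt"],
    "data": { "name": "sample record", "values": [1, 2, 3] }
  }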
Goal
• Designing a JSON interface to
control HDFS
• Development of two modules:
- For writing into the system
- For reading from the system
Outcome
• User can specify execution commands directly in the JSON file along with data
• Only data gets stored into the
system
• Commands are deleted from the file
after execution
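A from-scratch sketch of that workflow, not the project's actual code: it assumes the hypothetical JSON layout shown earlier, parses it with the Jackson library, shells out to bin/hadoop for execution, and rewrites the file with only the data left.

  import java.io.File;
  import com.fasterxml.jackson.databind.JsonNode;
  import com.fasterxml.jackson.databind.ObjectMapper;
  import com.fasterxml.jackson.databind.node.ObjectNode;

  public class JsonHdfsControl {
    public static void main(String[] args) throws Exception {
      File jsonFile = new File(args[0]);          // e.g. commands.json (hypothetical)
      ObjectMapper mapper = new ObjectMapper();
      JsonNode root = mapper.readTree(jsonFile);
      // Run every listed command through the ordinary HDFS shell
      for (JsonNode cmd : root.get("commands")) {
        Process p = Runtime.getRuntime().exec("./bin/hadoop fs " + cmd.asText());
        p.waitFor();
      }
      // Write the file back with only the data; commands are dropped after execution
      ObjectNode remaining = mapper.createObjectNode();
      remaining.set("data", root.get("data"));
      mapper.writerWithDefaultPrettyPrinter().writeValue(jsonFile, remaining);
    }
  }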
Sources and
Acknowledgements
Sources
• Dhruba Borthakur, Apache Hadoop
Developer, Facebook Data Infrastructure
• Matei Zaharia, Cloudera / Facebook / UC
Berkeley RAD Lab
• Devaraj Das, Yahoo! Inc. Bangalore and
Apache Software Foundation
• HDFS Java API:
- http://hadoop.apache.org/core/docs/current/api/
• HDFS source code:
- http://hadoop.apache.org/core/version_control.html
Acknowledgements
• Professor Chris Mattmann for guidance as and when required
• Hossein (Farshad) Tajalli for his continued support and help throughout the project
• All my classmates for providing
valuable inputs throughout the
work, especially through their
presentations
That’s All Folks!