CIET 575: Advanced Big Data Management
Hadoop Distributed Processing
Ghana Communication Technology University
Lecturer – Dr. Forgor Lempogo
2024
Philosophy to Scale for Big Data Processing
▪ Divide work
▪ Combine results
Distributed Processing
▪ How do we assign tasks to different workers efficiently?
▪ What happens if tasks fail?
▪ How do workers exchange results?
▪ How do we synchronize distributed tasks allocated to different workers?
Hadoop MapReduce
▪ MapReduce is a parallel programming model for writing distributed applications, devised at Google for efficient processing of large amounts of data (multi-terabyte datasets).
▪ It runs on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
▪ MapReduce programs run on Hadoop, an Apache open-source framework.
MapReduce
▪ The MapReduce algorithm is inspired mainly by the functional programming model.
▪ It is useful for processing huge amounts of data in a parallel, reliable and efficient way in cluster environments.
▪ It uses the divide-and-conquer technique to process large amounts of data.
▪ It divides the input task into smaller, manageable sub-tasks (which must be executable independently) so that they can run in parallel.
Components of MapReduce
[Diagram omitted]
Working of MapReduce
▪ The Hadoop ecosystem component MapReduce works by breaking the processing into two phases:
▪ The Map phase
▪ The Reduce phase
▪ Each phase has key-value pairs as its input and output.
▪ In addition, the programmer specifies two functions:
▪ The Map function
▪ The Reduce function
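▪ In the notation of the original Google MapReduce paper, the two functions have the following shapes (k and v denote key and value types):
▪ map: (k1, v1) → list(k2, v2)
▪ reduce: (k2, list(v2)) → list(k3, v3)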
The Map Function
▪ The map function takes a collection of data and sorts and filters it into a set of key-value pairs.
▪ It converts one set of data into another, in which individual elements are broken down into tuples (key/value pairs).
▪ The map function is performed by a program called a mapper.
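To make this concrete, below is a minimal word-count mapper sketch using the standard org.apache.hadoop.mapreduce Java API; it emits the pair (word, 1) for every token of every input line. The class name TokenizerMapper follows the classic Hadoop word-count example, which is worked through later in these slides.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Minimal word-count mapper: for each input line, emit (word, 1)
// for every token in the line.
public class TokenizerMapper
        extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);   // key = word, value = 1
        }
    }
}
```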
The Reduce Function
▪ The reduce function takes the output of the map function as its input and combines the data tuples that share a key, modifying the value accordingly.
▪ It thus summarizes the results of the map function into a single result per key.
▪ The reduce function is performed by a program called a reducer.
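Correspondingly, here is a minimal word-count reducer sketch in the same API: it receives each word together with the list of 1s emitted by the mappers and sums them.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Minimal word-count reducer: sum the 1s emitted by the mappers
// for each word and emit (word, total count).
public class IntSumReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);   // key = word, value = total occurrences
    }
}
```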
MapReduce
▪ The implementation of MapReduce complements the HDFS structure:
▪ The job tracker is a central control program used to coordinate and report on MapReduce processing jobs.
▪ A task tracker is a program responsible for running map and reduce tasks on a node.
▪ Batch processing runs tasks from beginning to end with no user interaction.
MapReduce
▪ MapReduce is a programming model for writing applications that can process Big Data in parallel on multiple nodes.
▪ MapReduce provides analytical capabilities for analyzing huge volumes of complex data.
Algorithms using MapReduce
▪ The MapReduce algorithm contains two important tasks, namely:
▪ Map
▪ Reduce
▪ The map task is performed by a Mapper class, and the reduce task by a Reducer class.
▪ The Mapper class takes the input, tokenizes it, and maps and sorts it.
▪ The output of the Mapper class is used as input by the Reducer class, which in turn searches for matching pairs and reduces them.
▪ MapReduce implements various mathematical algorithms to divide a task into small parts and assign them to multiple systems.
▪ In technical terms, the MapReduce algorithm helps send the Map and Reduce tasks to the appropriate servers in a cluster.
MapReduce Flow
[Diagram omitted]
MapReduce – Word Count Example
▪ Problem statement:
▪ Count the number of occurrences of each word in a dataset.
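A minimal driver for this problem, wiring together the mapper and reducer sketched earlier; this follows the classic Hadoop WordCount example, with the input and output paths taken from the command line.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Driver for the word-count job: configures the mapper/reducer classes,
// the output types, and the HDFS input/output paths, then submits the job.
public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // optional local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```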
The Map and Reduce Functions

The Map Function:
1. The framework runs multiple instances of the map function on different computers in the cluster.
2. The framework attaches one input split to each map function.
3. The framework calls the map function in a loop and passes one record at a time from the split.
4. The output of the map function is a key/value pair, and it goes back to the MapReduce framework.

The Reduce Function:
1. The framework collects the output records from all map functions, performs a shuffle/sort, and assembles the outputs into one record per key.
2. Each record is then passed to the reduce function, one at a time, in a loop.
3. The output of the reduce function is again a key/value pair, and it is written to a file in HDFS.
Features of MapReduce
▪ Simplicity – MapReduce jobs are easy to run. Applications can be written in any language, such as Java, C++ or Python.
▪ Scalability – MapReduce can process petabytes of data.
▪ Speed – Through parallel processing, problems that take days to solve are solved in hours or minutes by MapReduce.
▪ Fault tolerance – MapReduce takes care of failures. If one copy of the data is unavailable, another machine has a copy of the same key pair, which can be used to solve the same sub-task.
YARN
▪ Hadoop YARN (Yet Another Resource Negotiator) is the Hadoop ecosystem component that provides resource management.
▪ YARN is also one of the most important components of the Hadoop ecosystem.
▪ YARN is referred to as the operating system of Hadoop, as it is responsible for managing and monitoring workloads.
▪ It allows multiple data processing engines, such as real-time streaming and batch processing, to handle data stored on a single platform.
YARN
▪ It is like the operating system of Hadoop, as it monitors and manages the resources.
▪ YARN came into the picture with the launch of Hadoop 2.x, in order to support different workloads.
▪ It handles workloads like stream processing, interactive processing and batch processing on a single platform.
▪ YARN has two main components – the Node Manager and the Resource Manager.
YARN
▪ Node Manager
▪ It is YARN's per-node agent and takes care of the individual compute nodes in a Hadoop cluster.
▪ It monitors the resource usage (CPU, memory, etc.) of the local node and reports it to the Resource Manager.
▪ Resource Manager
▪ It is responsible for tracking the resources in the cluster and scheduling tasks such as MapReduce jobs.
▪ There are also an Application Master and a Scheduler in YARN.
YARN – Resource Manager
[Diagram omitted]
YARN
▪ The Application Master has two functions:
▪ Negotiating resources from the Resource Manager
▪ Working with the Node Manager to monitor and execute the sub-tasks
▪ The functions of the Resource Scheduler are:
▪ It allocates resources to the various running applications.
▪ But it does not monitor the status of the applications.
▪ So, in the event of a task failure, it does not restart the task.
▪ Another concept is the Container:
▪ It is nothing but a fraction of the Node Manager's capacity, i.e. CPU, memory, disk, network, etc.
Application workflow in Hadoop YARN
1. A client submits an application.
2. The Resource Manager allocates a container to start the Application Master.
3. The Application Master registers itself with the Resource Manager.
4. The Application Master negotiates containers from the Resource Manager.
5. The Application Master notifies the Node Manager to launch the containers.
6. The application code is executed in the containers.
7. The client contacts the Resource Manager/Application Master to monitor the application's status.
8. Once the processing is complete, the Application Master un-registers with the Resource Manager.
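As a rough illustration of step 1, the sketch below uses YARN's Java client API (org.apache.hadoop.yarn.client.api) to ask the Resource Manager for a new application and submit it. This is only a skeleton: a real client must also populate the Application Master's container launch context (launch command, JARs, environment), which is elided here, and the application name is illustrative.

```java
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch of application submission: the client asks the ResourceManager
// for a new application and submits it. Setting up the Application Master's
// ContainerLaunchContext (command, resources, environment) is omitted.
public class YarnSubmitSketch {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
        ctx.setApplicationName("demo-app"); // illustrative name
        // ctx.setAMContainerSpec(...);     // launch spec for the Application Master

        ApplicationId appId = yarnClient.submitApplication(ctx);
        System.out.println("Submitted application " + appId);
    }
}
```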
Components of YARN
[Diagram omitted]
Main features of YARN
▪ The main features of YARN are:
▪ Flexibility
▪ Enables purpose-built data processing models beyond MapReduce (batch), such as interactive and streaming processing.
▪ Due to this feature of YARN, other applications can also run alongside MapReduce programs in Hadoop 2.
▪ Efficiency
▪ As many applications run on the same cluster, the efficiency of Hadoop increases without much effect on the quality of service.
▪ Shared
▪ Provides a stable, reliable and secure foundation and shared operational services across multiple workloads.
▪ Additional programming models, such as graph processing and iterative modelling, are now possible for data processing.
Hive
▪ The Hadoop ecosystem component Apache Hive is an open-source data warehouse system for querying and analyzing large datasets stored in Hadoop files.
▪ Hive performs three main functions:
▪ Data summarization
▪ Query
▪ Analysis
▪ Hive uses a language called HiveQL (HQL), which is similar to SQL.
▪ HiveQL automatically translates SQL-like queries into MapReduce jobs that execute on Hadoop.
Hive - Main Parts
▪ The main parts of Hive are:
▪ Metastore – stores the metadata.
▪ Driver – manages the lifecycle of a HiveQL statement.
▪ Query compiler – compiles HiveQL into a Directed Acyclic Graph (DAG).
▪ Hive server – provides a Thrift interface and a JDBC/ODBC server.
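Because the Hive server exposes a JDBC interface, HiveQL can be issued from an ordinary Java program. The sketch below is illustrative, not a definitive client: it assumes a HiveServer2 instance on localhost:10000, illustrative credentials, and a hypothetical table named words.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: run a HiveQL query through HiveServer2's JDBC interface.
// Requires the hive-jdbc driver on the classpath; the URL, credentials,
// table and column names are illustrative.
public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "hiveuser", "");
             Statement stmt = conn.createStatement();
             // HiveQL looks like SQL; Hive compiles it into jobs on the cluster.
             ResultSet rs = stmt.executeQuery(
                 "SELECT word, COUNT(*) FROM words GROUP BY word")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```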
Pig
▪ Apache Pig is a high-level language platform for analyzing and querying huge datasets that are stored in HDFS.
▪ Pig, as a component of the Hadoop ecosystem, uses the Pig Latin language, which is very similar to SQL.
▪ It loads the data, applies the required filters and dumps the data in the required format.
▪ For program execution, Pig requires a Java runtime environment.
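Pig Latin can also be driven from a Java program through Pig's PigServer API. The sketch below is illustrative under assumptions: the input path, field layout and filter pattern are invented for the example, and it simply loads lines, filters them and stores the result.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

// Sketch: run a tiny Pig Latin pipeline (load, filter, store) from Java.
// The input/output paths and the single-field layout are illustrative.
public class PigSketch {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);
        pig.registerQuery("lines = LOAD 'input.txt' AS (line:chararray);");
        pig.registerQuery("hits = FILTER lines BY line MATCHES '.*hadoop.*';");
        pig.store("hits", "filtered_out"); // writes the result to HDFS
    }
}
```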
Pig - Main Parts
[Diagram omitted]
Pig - Features
▪ Features of Apache Pig:
▪ Extensibility – Users can create their own functions to carry out special-purpose processing.
▪ Optimization opportunities – Pig allows the system to optimize execution automatically, which lets the user pay attention to semantics instead of efficiency.
▪ Handles all kinds of data – Pig analyzes both structured and unstructured data.
HBase
▪ Apache HBase is a Hadoop ecosystem component: a distributed database designed to store structured data in tables that can have billions of rows and millions of columns.
▪ HBase is a scalable, distributed NoSQL database built on top of HDFS.
▪ HBase provides real-time access for reading and writing data in HDFS.
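As a sketch of this real-time read/write path using the standard HBase Java client API: the table 'users', column family 'info' and row/cell values are illustrative, and the table is assumed to already exist.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: write one cell to an HBase table and read it back.
// Table 'users' with column family 'info' is assumed to exist.
public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("users"))) {

            // Real-time write: row key "row1", column info:name
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"),
                          Bytes.toBytes("Ama"));
            table.put(put);

            // Real-time read of the same cell
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] name = result.getValue(Bytes.toBytes("info"),
                                          Bytes.toBytes("name"));
            System.out.println(Bytes.toString(name));
        }
    }
}
```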
HBase - Architecture
[Diagram omitted]
Components of HBase
▪ There are two HBase components, namely:
1. HBase Master
▪ It is not part of the actual data storage, but negotiates load balancing across all RegionServers.
▪ It maintains and monitors the Hadoop cluster.
▪ It performs administration (an interface for creating, updating and deleting tables).
▪ It controls failover.
▪ The HMaster handles DDL operations.
2. RegionServer
▪ The RegionServer process runs on every node in the Hadoop cluster.
▪ It is the worker node that handles read, write, update and delete requests from clients.
▪ It runs on the HDFS DataNode.
Hadoop Streaming
▪ Hadoop provides an API to MapReduce that allows the map and reduce functions to be written in languages other than Java.
▪ Hadoop Streaming uses Unix standard streams as the interface between Hadoop and the program.
▪ Any language that can read from standard input and write to standard output can be used to write the MapReduce program.
▪ Streaming is naturally suited to text processing; when used in text mode, it has a line-oriented view of the data.
Hadoop Streaming
▪ Map input data is passed over standard input to the map function, which processes it line by line and writes lines to standard output.
▪ A map output key-value pair is written as a single tab-delimited line.
▪ Input to the reduce function is in the same format – a tab-separated key-value pair – passed over standard input.
▪ The reduce function reads lines from standard input, which the framework guarantees are sorted by key, and writes its results to standard output.
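Since the contract is only lines on standard input and tab-delimited key-value lines on standard output, a streaming mapper can be written in any language. As a sketch, here is a word-count streaming mapper written as a plain Java program; a Python or Ruby script would follow exactly the same pattern, and the program would be passed to the framework via the hadoop-streaming JAR's -mapper option.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Sketch of a Hadoop Streaming mapper: read lines from standard input
// and write one tab-delimited "word<TAB>1" pair per token to standard
// output. The framework sorts these pairs by key before the reduce step.
public class StreamingWordCountMapper {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String word : line.trim().split("\\s+")) {
                if (!word.isEmpty()) {
                    System.out.println(word + "\t1");
                }
            }
        }
    }
}
```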
Q&A
❑ Any questions?