Pig Data flow language
(abstraction for MR jobs)
B. Ramamurthy
Abstraction layer for MR
• Raw MR is difficult to install and program. (Do we know
  about this? Then why did I ask you to do this?)
• There are many models that simplify designing MR
  applications:
  – MRJob for Python developers
  – Elastic MapReduce (EMR) from Amazon AWS
  – Pig from Apache via Yahoo
  – Hive from Apache via Facebook
  – and others
• We will look at Pig in detail. It is a data flow language,
  so it is conceptually closer to the way we solve problems…
References
• The Pig ebook is available from our library:
  Alan Gates, Programming Pig, O'Reilly Media, 2011.
  ISBN: 9781449317683
Pig Data flow Language
• A Pig data flow script describes a directed
  acyclic graph (DAG) where the edges are data
  flows and the nodes are operations.
• There are no if statements or for loops in Pig:
  procedural and object-oriented languages describe
  control flow, and treat data flow as a side effect.
• Pig focuses on data flow.
What is Pig? An example
• Pig is a scripting language that helps in designing big data
  solutions using high-level primitives.
• A Pig script can be executed locally; it is typically translated into
  an MR job/task workflow and executed on Hadoop.
• Pig itself runs as an MR job on Hadoop.
• You can access the local file system using pig -x local (e.g. file:/…)
• Other file systems accessible from the grunt> shell of non-local Pig
  are hdfs:// and s3://.
• You can transfer data into the local file system from S3:
  hadoop dfs -copyToLocal s3n://cse487/pig1/ps5.pig /home/hadoop/pig1/ps5.pig
  hadoop dfs -copyToLocal s3n://cse487/pig1/data2 /home/hadoop/pig1/data2
  Then run ps5.pig in local mode:
  pig -x local
grunt> run ps5.pig
Simple Pig scripts: wordcount
A = load 'data2' as (line);                                  -- each record is a line of text
words = foreach A generate flatten(TOKENIZE(line)) as word;  -- split each line into words
grpd = group words by word;                                  -- group identical words together
count = foreach grpd generate group, COUNT(words);           -- count each group
store count into 'pig1out';                                  -- write (word, count) pairs
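If data2 contained, say, the two lines "hello world" and "hello pig"
(a hypothetical input), 'pig1out' would hold tab-separated pairs such as:
hello	2
world	1
pig	1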
Sample Pig script: simple data analysis
Sample input 'data3' (tab-separated columns x, y, z):
2	-2	3
-4	-7	3
4	3	5
5	4	4
5	4	6
7	6	5
A = LOAD 'data3' AS (x,y,z);
B = FILTER A by x> 0;
C = GROUP B BY x;
D = FOREACH C GENERATE group,COUNT(B);
STORE D INTO 'p6out';
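With the sample rows above (assuming that is the row layout of data3), the
positive x values are 2, 4, 5, 5, 7, so 'p6out' would hold tab-separated
(x, count) pairs along these lines:
2	1
4	1
5	2
7	1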
See the pattern?
• LOAD
• FILTER
• GROUP
• GENERATE (apply some function from piggybank)
• STORE (DUMP for interactive debugging)
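The same pipeline skeleton, written out with hypothetical file and field
names, looks like this:
A = LOAD 'input' AS (f1, f2);            -- LOAD
B = FILTER A BY f1 > 0;                  -- FILTER
C = GROUP B BY f2;                       -- GROUP
D = FOREACH C GENERATE group, COUNT(B);  -- GENERATE (here the built-in COUNT)
STORE D INTO 'output';                   -- STORE (or DUMP D; while debugging)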
Pig Latin
• Pig Latin is the language Pig scripts are written in.
• It is a parallel data flow language.
• Mathematically, Pig Latin describes a directed
  acyclic graph (DAG) where the edges are data flows
  and the nodes are operators that process data.
• It is a data flow, not a control flow, language: no if
  statements and no for loops! (Traditional OO
  programming describes control flow, not data flow.)
Pig and query language
• How do Pig and SQL compare?
• SQL describes "what" (the user's question); it does
  NOT describe how it is to be solved.
• SQL is built around answering one question: lots of
  subqueries and temporary tables resulting in one thing,
  an inverted process.
  – Remember from our earlier discussions: if these temp tables are
    NOT in-memory, their random access is expensive.
• Pig describes the data pipeline from first step to final step.
• HDFS vs RDBMS tables
• Pig vs Hive
• Yahoo vs Facebook
SQL (vs. Pig)
CREATE TEMP TABLE t1 AS
SELECT customer, SUM(purchase) AS total_purchases
FROM transactions
GROUP BY customer;

SELECT t1.customer, total_purchases, zipcode
FROM t1, customer_profile
WHERE t1.customer = customer_profile.customer;
(SQL vs.) Pig
txns = load 'transactions' as (customer, purchase);
grouped = group txns by customer;
total = foreach grouped generate group, SUM(txns.purchase) as tp;
profile = load 'customer_profile' as (customer, zipcode);
answer = join total by group, profile by customer;
dump answer;
Pig and HDFS and MR
• Pig does not require HDFS.
• Pig can run on any file system, as long as you
  point the data flow at the data appropriately.
  – This is great, since you can use not just file:// or hdfs://
    but also other systems to be developed in the future.
• Similarly, Pig Latin has several advantages over raw MR
  during the design phase (see chapter 1 of the Programming
  Pig book), while still executing on MR.
Uses of Pig
• Traditional Extract, Transform, Load (ETL) data pipelines
• Research on raw data
• Iterative processing
• Prototyping (debugging) on small data and a local system before
  launching big-data, multi-node MR jobs
• Good for EDA!!
• Largest use case: data pipelines: take raw data, cleanse it, load it
  into a data warehouse
• Ad-hoc queries over data whose schema is unknown
• What is it not good for? For workloads that update a few records or
  look up data in some random order, Pig is not a good choice.
• In 2009, 50% of Yahoo! jobs executed were using Pig.
• Let's execute some Pig scripts on a local installation and then on an
  Amazon installation.
Apache Pig
• Apache Pig is a platform for analyzing large data sets that consists
  of a high-level language for expressing data analysis programs,
  coupled with infrastructure for evaluating these programs.
• Pig's infrastructure layer consists of a compiler that produces
  sequences of Map-Reduce programs.
• Pig's language layer currently consists of a textual language called
  Pig Latin, which has the following key properties:
  – Ease of programming. It is trivial to achieve parallel execution of simple,
    "embarrassingly parallel" data analysis tasks. Complex tasks comprised of
    multiple interrelated data transformations are explicitly encoded as data flow
    sequences, making them easy to write, understand, and maintain.
  – Optimization opportunities. The way in which tasks are encoded permits the
    system to optimize their execution automatically, allowing the user to focus on
    semantics rather than efficiency.
  – Extensibility. Users can create their own functions to do special-purpose
    processing.
Running Pig
• You can execute Pig Latin statements:
  – Using the grunt shell (interactive command line):
    $ pig
    ... - Connecting to ...
    grunt> A = load 'data';
    grunt> B = ... ;
  – From a script on the command line, in local mode or Hadoop
    mapreduce mode:
    $ pig myscript.pig
    $ pig -x local myscript.pig      (batch, local mode)
  – Either interactively or in batch
Program/flow organization
• A LOAD statement reads data from the file
system.
• A series of "transformation" statements
process the data.
• A STORE statement writes output to the file
system; or, a DUMP statement displays output
to the screen.
Interpretation
• In general, Pig processes Pig Latin statements as follows:
  – First, Pig validates the syntax and semantics of all statements.
  – Next, when Pig encounters a DUMP or STORE, it executes the
    statements.

A = LOAD 'student' USING PigStorage() AS (name:chararray, age:int, gpa:float);
B = FOREACH A GENERATE name;
DUMP B;
(John)
(Mary)
(Bill)
(Joe)

• The STORE operator will write the result to a file instead.
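For example, to persist B rather than print it (the output path here is
illustrative):
STORE B INTO 'names_out' USING PigStorage();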
Simple Examples
A = LOAD 'input' AS (x, y, z);
B = FILTER A BY x > 5;
DUMP B;
C = FOREACH B GENERATE y, z;
STORE C INTO 'output';
-----------------------------------------------------------------------------
A = LOAD 'input' AS (x, y, z);
B = FILTER A BY x > 5;
STORE B INTO 'output1';
C = FOREACH B GENERATE y, z;
STORE C INTO 'output2';
Let's run a Pig script on AWS
• See the tutorial at
  http://aws.amazon.com/articles/2729
• This is about parsing a web log for frequent
  external referrers, IPs, frequent terms, etc.
• You place the data and the Pig script on S3.
• Then start the Pig workflow on AWS.
• The output also goes back into the S3 space.
Pig Script
-- make the piggybank UDFs (EXTRACT) available
register file:/home/hadoop/lib/pig/piggybank.jar
DEFINE EXTRACT org.apache.pig.piggybank.evaluation.string.EXTRACT();

-- load raw log lines and parse the Apache combined log format
RAW_LOGS = LOAD '$INPUT' USING TextLoader as (line:chararray);
LOGS_BASE = foreach RAW_LOGS generate
  FLATTEN(
    EXTRACT(line, '^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')
  )
  as (
    remoteAddr:chararray, remoteLogname:chararray, user:chararray, time:chararray,
    request:chararray, status:int, bytes_string:chararray, referrer:chararray,
    browser:chararray
  );

-- keep only search-engine referrers and extract the q= search terms
REFERRER_ONLY = FOREACH LOGS_BASE GENERATE referrer;
FILTERED = FILTER REFERRER_ONLY BY referrer matches '.*bing.*' OR referrer matches '.*google.*';
SEARCH_TERMS = FOREACH FILTERED GENERATE FLATTEN(EXTRACT(referrer, '.*[&\\?]q=([^&]+).*')) as terms:chararray;
SEARCH_TERMS_FILTERED = FILTER SEARCH_TERMS BY NOT $0 IS NULL;

-- count terms and keep the top 50
SEARCH_TERMS_COUNT = FOREACH (GROUP SEARCH_TERMS_FILTERED BY $0) GENERATE $0, COUNT($1) as num;
SEARCH_TERMS_COUNT_SORTED = LIMIT (ORDER SEARCH_TERMS_COUNT BY num DESC) 50;
STORE SEARCH_TERMS_COUNT_SORTED into '$OUTPUT';
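The script reads '$INPUT' and '$OUTPUT' as parameters, so when running it
yourself you supply them on the command line (the bucket and script names
here are illustrative):
pig -param INPUT=s3n://mybucket/logs -param OUTPUT=s3n://mybucket/report myscript.pig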
More examples from Cloudera
• http://www.cloudera.com/wp-content/uploads/2010/01/IntroToPig.pdf
  A very nice presentation from Cloudera…
• Also see Apache's Pig page:
  http://pig.apache.org/docs/r0.9.1/index.html
Pig’s data model
• Scalar types: int, long, float (early versions; recently float has been
  dropped), double, chararray, bytearray
• Complex types: map, tuple, bag
• Map: chararray to any Pig element; in effect, a <key> to <value> mapping.
  The map constant ['name'#'bob', 'age'#55] creates a map with two keys,
  name and age; the first value is a chararray and the second is an integer.
• Tuple: a fixed-length ordered collection of Pig data elements, equivalent
  to a row in SQL. Because it is ordered, you can refer to elements by field
  position. ('bob', 55) is a tuple with two fields.
• Bag: an unordered collection of tuples; you cannot reference a tuple by
  position. E.g. {('bob',55), ('sally',52), ('john', 25)} is a bag with 3
  tuples. Bags may become large and may spill from memory to disk.
• Null: unknown or missing data; any data element can be null. (In Java
  it is null pointers… the meaning is different in Pig.)
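A load statement declaring all three complex types might look like this
(the file and field names are made up):
players = load 'players' as (name:chararray,
                             stats:map[],
                             position:tuple(x:int, y:int),
                             scores:bag{t:(score:int)});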
Pig schema
• Very relaxed with respect to schemas.
• The schema is defined at the time you load the data.
• Runtime declaration of schemas is really nice.
• You can operate without metadata.
• On the other hand, metadata can be stored in a
  repository (HCatalog) and used, for example for JSON
  format, etc.
• Gently typed: between Java and Perl at the two
  extremes.
Schema Definition
divs = load 'NYSE_dividends' as (exchange:chararray,
        symbol:chararray, date:chararray, dividend:double);

Or if you are lazy:

divs = load 'NYSE_dividends' as (exchange, symbol, date, dividend);

But what if the data input is really complex, e.g. JSON objects?
One can keep a schema in HCatalog (an Apache incubation project), a
metadata repository that facilitates reading/loading input data in
other formats:

divs = load 'mydata' using HCatLoader();
Pig Latin
• Basics: keywords, relation names, field names
• Keywords are not case sensitive, but relation
  and field names are! User-defined functions
  are also case sensitive.
• Comments: /* */ for multi-line, -- for single-line
• Each processing step results in data:
  – relation name = data operation
  – Field names start with a letter
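A small illustration of these rules (the names are made up):
/* load the data, then
   project one field */
A = load 'data' as (user:chararray, age:int);
B = foreach A generate user;  -- single-line comment
-- 'A' and 'a' would be two different relation names, but
-- LOAD, load, and Load are all the same keyword.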
More examples
• No Pig schema
daily = load 'NYSE_daily';
calcs = foreach daily generate $7/100.0, SUBSTRING($0,0,1), $6-$3;
(Here - works only on numeric types in Pig.)
• No-schema filter
daily = load 'NYSE_daily';
fltrd = filter daily by $6 > $3;
(Here > is allowed for numeric, bytearray or chararray; Pig is going to guess the type!)
• Math (float cast)
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high:float, low:float, close,
        volume:int, adj_close);
rough = foreach daily generate volume * close; -- will convert to float
Thus the free "typing" may result in unintended consequences. Be aware: Pig is sometimes
stupid. For a more in-depth view, look also at how "casts" are done in Pig.
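If you do not want Pig to guess, you can cast explicitly (same schema as
above):
typed = foreach daily generate (double)volume * (double)close; -- explicit cast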
Load (input method)
• Can easily interface to HBase: read from HBase
• using clause
  – divs = load 'NYSE_dividends' using HBaseStorage();
  – divs = load 'NYSE_dividends' using PigStorage();
  – divs = load 'NYSE_dividends' using PigStorage(',');
• as clause
  – daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low,
    close, volume);
Store & dump
• Default is PigStorage (it writes tab-separated output)
  – store processed into '/data/example/processed';
• For comma-separated output use:
  – store processed into '/data/example/processed' using
    PigStorage(',');
• Can write into HBase using HBaseStorage():
  – store processed into 'processed' using HBaseStorage();
• Dump is for interactive debugging and prototyping
Relational operations
• Allow you to transform data by sorting, grouping,
  joining, projecting and filtering
• foreach supports an array of expressions; the
  simplest are constants and field references.
rough = foreach daily generate volume * close;
calcs = foreach daily generate $7/100.0, SUBSTRING($0,0,1), $6-$3;
• UDFs (User Defined Functions) can also be used in expressions
• Filter operation (matches takes a regular expression)
CMsyms = filter divs by symbol matches 'CM.*';
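As a sketch of a UDF call inside an expression, using the UPPER string
function (a built-in in recent Pig versions; earlier versions carried it
in piggybank):
upped = foreach divs generate UPPER(symbol) as symbol, dividend;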
Operations (cntd)
• Group operation collects together records with the
  same key.
  – grpd = group daily by stock; -- output is <key, bag>
  – counts = foreach grpd generate group, COUNT(daily);
  – Can also group by multiple keys:
    grpd = group daily by (stock, exchange);
• Group forces the "reduce" phase of MR
• Pig offers mechanisms for addressing data skew and
  unbalanced use of reducers (we will not worry about
  this now)
• Order by: strict ordering (example below)
• Maps, tuples, bags
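A minimal order by sketch, using the NYSE_daily fields from earlier:
bydate = order daily by date;                 -- single key, ascending
byvol  = order daily by volume desc, symbol;  -- multiple keys, mixed order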