Database and MapReduce

Based on Jimmy Lin's lecture slides (http://www.umiacs.umd.edu/~jimmylin/cloud-2010-Spring/index.html), licensed under a Creative Commons Attribution 3.0 License.

Limitations of Existing Data Analytics Architecture
(diagram: Instrumentation → Collection → storage-only grid holding the original raw data → ETL compute grid → RDBMS holding aggregated data → BI reports + interactive apps)
• Can't explore the original high-fidelity raw data (the RDBMS holds only aggregates)
• Moving data to compute doesn't scale (the ETL compute grid)
• The storage-only grid is mostly append; archiving = premature data death
Slide from Dr. Amr Awadallah (CTO & VPE, Cloudera), Hadoop talk at Stanford. ©2011 Cloudera, Inc. All Rights Reserved.

Working Scenario
• Two tables:
– User demographics (gender, age, income, etc.)
– User page visits (URL, time spent, etc.)
• Analyses we might want to perform:
– Statistics on demographic characteristics
– Statistics on page visits
– Statistics on page visits by URL
– Statistics on page visits by demographic characteristic
– …

Relational Algebra
• Primitives
– Projection (π)
– Selection (σ)
– Cartesian product (×)
– Set union (∪)
– Set difference (−)
– Rename (ρ)
• Other operations
– Join (⋈)
– Group by… aggregation
– …

Design Pattern: Secondary Sorting
• MapReduce sorts input to reducers by key
– Values are arbitrarily ordered
• What if we want to sort the values as well?
– E.g., k → (v1, r), (v3, r), (v4, r), (v8, r)…

Secondary Sorting: Solutions
• Solution 1:
– Buffer values in memory, then sort
– Why is this a bad idea?
• Solution 2:
– "Value-to-key conversion" design pattern: form a composite intermediate key, (k, v1)
– Let the execution framework do the sorting
– Preserve state across multiple key-value pairs to handle processing
– Anything else we need to do?

Value-to-Key Conversion
• Before: k → (v1, r), (v4, r), (v8, r), (v3, r)… — values arrive in arbitrary order
• After: (k, v1) → (v1, r); (k, v3) → (v3, r); (k, v4) → (v4, r); (k, v8) → (v8, r)… — values arrive in sorted order
• Process by preserving state across multiple keys
• Remember to partition correctly!

Projection
(diagram: tuples R1–R5 map one-to-one to projected tuples R1–R5 that keep only the selected attributes)

Projection in MapReduce
• Easy!
– Map over tuples, emit new tuples with the appropriate attributes
– No reducers, unless for regrouping or resorting tuples
– Alternatively: perform in the reducer, after some other processing
• Basically limited by HDFS streaming speeds
– Speed of encoding/decoding tuples becomes important
– Relational databases take advantage of compression
– Semistructured data? No problem!

Selection
(diagram: tuples R1–R5 are filtered down to R1 and R3, the tuples that satisfy the selection criterion)

Selection in MapReduce
• Easy!
– Map over tuples, emit only the tuples that meet the criteria
– No reducers, unless for regrouping or resorting tuples
– Alternatively: perform in the reducer, after some other processing
• Basically limited by HDFS streaming speeds
– Speed of encoding/decoding tuples becomes important
– Relational databases take advantage of compression
– Semistructured data? No problem!

Group by… Aggregation
• Example: What is the average time spent per URL?
• In SQL:
– SELECT url, AVG(time) FROM visits GROUP BY url
• In MapReduce:
– Map over tuples, emit time, keyed by url
– The framework automatically groups values by key
– Compute the average in the reducer
– Optimize with combiners (see the sketch below)
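A minimal Hadoop (Java) sketch of the average-time-per-URL job described above. The class names, the tab-separated "url <tab> time" input layout, and the string-encoded (sum, count) pairs are assumptions for illustration, not part of the original slides. Note that a combiner cannot simply average partial averages, so the sketch carries (sum, count) pairs through the combine stage and divides only in the reducer.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AverageTimePerUrl {

  // Emit (url, "time,1"): the (sum, count) pair lets a combiner pre-aggregate safely.
  public static class VisitMapper extends Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t");   // assumed layout: url \t time
      context.write(new Text(fields[0]), new Text(fields[1] + ",1"));
    }
  }

  // Combiner: adds up partial sums and counts (both are associative, unlike averages).
  public static class SumCountCombiner extends Reducer<Text, Text, Text, Text> {
    public void reduce(Text url, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      double sum = 0; long count = 0;
      for (Text v : values) {
        String[] parts = v.toString().split(",");
        sum += Double.parseDouble(parts[0]);
        count += Long.parseLong(parts[1]);
      }
      context.write(url, new Text(sum + "," + count));
    }
  }

  // Reducer: final average = total sum / total count.
  public static class AverageReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    public void reduce(Text url, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      double sum = 0; long count = 0;
      for (Text v : values) {
        String[] parts = v.toString().split(",");
        sum += Double.parseDouble(parts[0]);
        count += Long.parseLong(parts[1]);
      }
      context.write(url, new DoubleWritable(sum / count));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "average time per url");
    job.setJarByClass(AverageTimePerUrl.class);
    job.setMapperClass(VisitMapper.class);
    job.setCombinerClass(SumCountCombiner.class);
    job.setReducerClass(AverageReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}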
Relational Joins
(Source: Microsoft Office Clip Art)
(diagram: tuples R1–R4 from one relation are matched with tuples S1–S4 from the other on the join key, e.g., R1–S2, R2–S4, R3–S1, R4–S3)

Natural Join Operation – Example
• Relations r, s:

r:  A  B  C  D          s:  B  D  E
    α  1  α  a              1  a  α
    β  2  γ  a              3  a  β
    γ  4  β  b              1  a  γ
    α  1  γ  a              2  b  δ
    δ  2  β  b              3  b  ε

r ⋈ s:  A  B  C  D  E
        α  1  α  a  α
        α  1  α  a  γ
        α  1  γ  a  α
        α  1  γ  a  γ
        δ  2  β  b  δ

Natural Join Example
R1 (Reserves):           S1 (Sailors):
sid  bid  day            sid  sname   rating  age
22   101  10/10/96       22   dustin  7       45.0
58   103  11/12/96       31   lubber  8       55.5
                         58   rusty   10      35.0

R1 ⋈ S1:
sid  sname   rating  age   bid  day
22   dustin  7       45.0  101  10/10/96
58   rusty   10      35.0  103  11/12/96

Types of Relationships
• Many-to-many
• One-to-many
• One-to-one

Join Algorithms in MapReduce
• Reduce-side join
• Map-side join
• In-memory join
– Striped variant
– Memcached variant

Reduce-side Join
• Basic idea: group by join key (a code sketch follows these slides)
– Map over both sets of tuples
– Emit each tuple as the value, with the join key as the intermediate key
– The execution framework brings together tuples sharing the same key
– Perform the actual join in the reducer
– Similar to a "sort-merge join" in database terminology
• Two variants
– 1-to-1 joins
– 1-to-many and many-to-many joins

Reduce-side Join: 1-to-1
• Map emits each tuple (R1, R4, S2, S3, …) keyed by its join key
• Reduce sees, for each join key, the matching R and S tuples together (e.g., R1 with S2, R4 with S3)
• Note: there is no guarantee whether R or S comes first

Reduce-side Join: 1-to-many
• Map emits R1, S2, S3, S9, … keyed by the join key
• Reduce sees R1 together with S2, S3, … (again in arbitrary order)

Reduce-side Join: V-to-K Conversion
• In the reducer: when a new join key is encountered, hold the R tuple (e.g., R1) in memory, then cross it with the S records that follow (S2, S3, S9, …)
• Next key: hold R4 in memory, cross with S3, S7, …

Reduce-side Join: many-to-many
• In the reducer: hold the R tuples (R1, R5, R8) in memory
• Cross them with the records from the other set (S2, S3, S9, …)
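A minimal Java sketch of the basic (buffering) reduce-side join described above, for two text datasets whose join key is the first tab-separated field. All class names, the "R"/"S" tags, and the input layout are assumptions for illustration; the value-to-key refinement would instead use a composite key plus a custom partitioner (and grouping comparator) so that R tuples arrive before S tuples and need not all be buffered.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReduceSideJoin {

  // Each mapper emits (join key, tagged tuple) so the reducer can tell R from S.
  public static class RMapper extends Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t", 2);   // assumed: join key is the first field
      context.write(new Text(fields[0]), new Text("R\t" + fields[1]));
    }
  }

  public static class SMapper extends Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t", 2);
      context.write(new Text(fields[0]), new Text("S\t" + fields[1]));
    }
  }

  // Reducer buffers the tuples for a key by relation, then emits their cross product.
  public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      List<String> rTuples = new ArrayList<String>();
      List<String> sTuples = new ArrayList<String>();
      for (Text v : values) {
        String[] tagged = v.toString().split("\t", 2);
        if ("R".equals(tagged[0])) rTuples.add(tagged[1]); else sTuples.add(tagged[1]);
      }
      for (String r : rTuples) {
        for (String s : sTuples) {
          context.write(key, new Text(r + "\t" + s));      // cross product per join key
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "reduce-side join");
    job.setJarByClass(ReduceSideJoin.class);
    MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, RMapper.class);
    MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, SMapper.class);
    job.setReducerClass(JoinReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(job, new Path(args[2]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}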
Map-side Join: Basic Idea
• Assume the two datasets are sorted by the join key
• A sequential scan through both datasets then suffices to join them (called a "merge join" in database terminology)

Map-side Join: Parallel Scans
• If the datasets are sorted by the join key, the join can be accomplished by a scan over both datasets
• How can we accomplish this in parallel?
– Partition and sort both datasets in the same manner
• In MapReduce:
– Map over one dataset, read from the corresponding partition of the other
– No reducers necessary (unless to repartition or resort)
• Consistently partitioned datasets: realistic to expect?

In-Memory Join
• Basic idea: load one dataset into memory, stream over the other dataset
– Works if R << S and R fits into memory
– Called a "hash join" in database terminology
• MapReduce implementation (see the sketch after the next slide)
– Distribute R to all nodes
– Map over S; each mapper loads R into memory, hashed by the join key
– For every tuple in S, look up its join key in R
– No reducers, unless for regrouping or resorting tuples

In-Memory Join: Variants
• Striped variant:
– R too big to fit into memory?
– Divide R into R1, R2, R3, … such that each Rn fits into memory
– Perform the in-memory join for each stripe: ∀n, Rn ⋈ S
– Take the union of all join results
• Memcached join:
– Load R into memcached
– Replace the in-memory hash lookup with a memcached lookup
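A minimal map-only Java sketch of the basic in-memory (hash) join. It assumes the smaller relation R is available to every mapper as a local tab-separated file whose path is passed through the job configuration (in practice it would typically be shipped to the nodes, e.g., via the distributed cache), and that the join key is the first field of each record; these details are illustrative assumptions. The striped and memcached variants change only how the lookup table is loaded and probed.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InMemoryJoin {

  public static class HashJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Map<String, String> rTable = new HashMap<String, String>();

    // Load the small relation R into an in-memory hash table, keyed by the join key.
    protected void setup(Context context) throws IOException {
      String localPath = context.getConfiguration().get("join.r.local.path");
      BufferedReader reader = new BufferedReader(new FileReader(localPath));
      String line;
      while ((line = reader.readLine()) != null) {
        String[] fields = line.split("\t", 2);
        rTable.put(fields[0], fields[1]);
      }
      reader.close();
    }

    // Stream over S: for each tuple, probe the hash table and emit the joined record.
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t", 2);
      String rMatch = rTable.get(fields[0]);
      if (rMatch != null) {
        context.write(new Text(fields[0]), new Text(rMatch + "\t" + fields[1]));
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("join.r.local.path", args[0]);   // local copy of R on each node (hypothetical path)
    Job job = Job.getInstance(conf, "in-memory join");
    job.setJarByClass(InMemoryJoin.class);
    job.setMapperClass(HashJoinMapper.class);
    job.setNumReduceTasks(0);                 // map-only: no shuffle needed
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[1]));   // the large relation S
    FileOutputFormat.setOutputPath(job, new Path(args[2]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}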
Memcached
• Caching servers: 15 million requests per second, 95% handled by memcache (15 TB of RAM)
• Database layer: 800 eight-core Linux servers running MySQL (40 TB user data)
Source: Technology Review (July/August, 2008)

Memcached Join
• Memcached join:
– Load R into memcached
– Replace the in-memory hash lookup with a memcached lookup
• Capacity and scalability?
– Memcached capacity >> RAM of an individual node
– Memcached scales out with the cluster
• Latency?
– Memcached is fast (basically, the speed of the network)
– Batch requests to amortize latency costs
Source: see the tech report by Lin et al. (2009)

Which join to use?
• In-memory join > map-side join > reduce-side join
– Why?
• Limitations of each?
– In-memory join: memory
– Map-side join: sort order and partitioning
– Reduce-side join: general purpose

Processing Relational Data: Summary
• MapReduce algorithms for processing relational data:
– Group by, sorting, and partitioning are handled automatically by the shuffle/sort in MapReduce
– Selection, projection, and other computations (e.g., aggregation) are performed either in the mapper or in the reducer
– Multiple strategies for relational joins
• Complex operations require multiple MapReduce jobs
– Example: top ten URLs in terms of average time spent
– Opportunities for automatic optimization

Evolving roles for relational databases and MapReduce

Limitations of Existing Data Analytics Architecture
(slide repeated from the beginning of the deck: the same Cloudera diagram and caveats; ©2011 Cloudera, Inc., from Dr. Amr Awadallah's Hadoop talk at Stanford)

OLTP/OLAP/Hadoop Architecture
(diagram: OLTP → Hadoop → OLAP, with ETL — Extract, Transform, and Load — between the stages)

Hive and Pig

Need for High-Level Languages
• Hadoop is great for large-data processing!
– But writing Java programs for everything is verbose and slow
– Analysts don't want to (or can't) write Java
• Solution: develop higher-level data processing languages
– Hive: HQL is like SQL
– Pig: Pig Latin is a bit like Perl

Hive and Pig
• Hive: data warehousing application in Hadoop
– Query language is HQL, a variant of SQL
– Tables stored on HDFS as flat files
– Developed by Facebook, now open source
• Pig: large-scale data processing system
– Scripts are written in Pig Latin, a dataflow language
– Developed by Yahoo!, now open source
– Roughly 1/3 of all Yahoo! internal jobs
• Common idea:
– Provide a higher-level language to facilitate large-data processing
– The higher-level language "compiles down" to Hadoop jobs

Hive: Background
• Started at Facebook
• Data was collected by nightly cron jobs into an Oracle DB
• "ETL" via hand-coded Python
• Grew from 10s of GBs (2006) to 1 TB/day of new data (2007), now 10x that
Source: cc-licensed slide by Cloudera (also the following Hive slides)

Hive Components
• Shell: allows interactive queries
• Driver: session handles, fetch, execute
• Compiler: parse, plan, optimize
• Execution engine: DAG of stages (MR, HDFS, metadata)
• Metastore: schema, location in HDFS, SerDe

Data Model
• Tables
– Typed columns (int, float, string, boolean)
– Also, list: map (for JSON-like data)
• Partitions
– For example, range-partition tables by date
• Buckets
– Hash partitions within ranges (useful for sampling, join optimization)

Metastore
• Database: namespace containing a set of tables
• Holds table definitions (column types, physical layout)
• Holds partitioning information
• Can be stored in Derby, MySQL, and many other relational databases

Physical Layout
• Warehouse directory in HDFS
– E.g., /user/hive/warehouse
• Tables stored in subdirectories of the warehouse
– Partitions form subdirectories of tables
• Actual data stored in flat files
– Control-character-delimited text, or SequenceFiles
– With a custom SerDe, can use an arbitrary format

Hive: Example
• Hive looks similar to an SQL database
• Relational join on two tables:
– Table of word counts from the Shakespeare collection
– Table of word counts from the Bible

SELECT s.word, s.freq, k.freq FROM shakespeare s
  JOIN bible k ON (s.word = k.word)
  WHERE s.freq >= 1 AND k.freq >= 1
  ORDER BY s.freq DESC LIMIT 10;

word   s.freq  k.freq
the    25848   62394
I      23031    8854
and    19671   38985
to     18038   13526
of     16700   34654
a      14170    8057
you    12702    2720
my     11297    4135
in     10797   12445
is      8882    6884

Source: Material drawn from Cloudera training VM
Hive: Behind the Scenes

SELECT s.word, s.freq, k.freq FROM shakespeare s
  JOIN bible k ON (s.word = k.word)
  WHERE s.freq >= 1 AND k.freq >= 1
  ORDER BY s.freq DESC LIMIT 10;

(Abstract Syntax Tree)

(TOK_QUERY
  (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k)
    (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word))))
  (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE))
    (TOK_SELECT
      (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word))
      (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq))
      (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq)))
    (TOK_WHERE (AND (>= (. (TOK_TABLE_OR_COL s) freq) 1)
                    (>= (. (TOK_TABLE_OR_COL k) freq) 1)))
    (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq)))
    (TOK_LIMIT 10)))

(one or more MapReduce jobs)

Hive: Behind the Scenes (query plan, abridged)

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-0 is a root stage

Stage-1 (Map Reduce):
  Map:    TableScan s → Filter (freq >= 1) → Reduce Output (key: word, sort order +, tag 0, values: freq, word)
          TableScan k → Filter (freq >= 1) → Reduce Output (key: word, sort order +, tag 1, values: freq)
  Reduce: Join Operator (Inner Join 0 to 1) → Filter ((_col0 >= 1) and (_col2 >= 1)) → Select (word and the two frequencies) → File Output (SequenceFile)

Stage-2 (Map Reduce):
  Map:    read the temporary file from Stage-1 → Reduce Output (key: _col1, the frequency to sort by; values: _col0, _col1, _col2)
  Reduce: Extract → Limit → File Output (text)

Stage-0:
  Fetch Operator (limit: 10)

Example Data Analysis Task
• Find users who tend to visit "good" pages.

Visits:                              Pages:
user  url                time       url             pagerank
Amy   www.cnn.com        8:00       www.cnn.com     0.9
Amy   www.crap.com       8:05       www.flickr.com  0.9
Amy   www.myblog.com     10:00      www.myblog.com  0.7
Amy   www.flickr.com     10:05      www.crap.com    0.2
Fred  cnn.com/index.htm  12:00      ...
...

Pig slides adapted from Olston et al. (also the following slides)

Conceptual Dataflow
• Load Visits(user, url, time) and Pages(url, pagerank)
• Canonicalize URLs
• Join on url = url
• Group by user
• Compute average pagerank
• Filter avgPR > 0.5

Pig Latin Script

Visits = load '/data/visits' as (user, url, time);
Visits = foreach Visits generate user, Canonicalize(url), time;
Pages = load '/data/pages' as (url, pagerank);
VP = join Visits by url, Pages by url;
UserVisits = group VP by user;
UserPageranks = foreach UserVisits generate user, AVG(VP.pagerank) as avgpr;
GoodUsers = filter UserPageranks by avgpr > '0.5';
store GoodUsers into '/data/good_users';

System-Level Dataflow
(diagram: Visits and Pages are loaded in parallel; Visits URLs are canonicalized; the datasets are joined by url and grouped by user; the average pagerank is computed per user; a final filter produces the answer)
MapReduce Code
(The slide shows the equivalent hand-written Java MapReduce program: a chain of jobs — load and tag the pages, load and filter the users by age, a reduce-side join of the two, a group-by with a combiner to count URLs, and a final top-100 job — implemented with classes such as LoadPages, LoadAndFilterUsers, Join, LoadJoined, ReduceUrls, LoadClicks, and LimitClicks, all wired together with JobConf and JobControl under the name "Find top 100 sites for users 18 to 25". The two-column code listing, a couple hundred lines of Java, is too garbled in this extraction to reproduce faithfully; its point is that the short dataflow above expands into hundreds of lines of boilerplate.)
Pig slides adapted from Olston et al.
Java vs. Pig Latin
• 1/20 the lines of code
• 1/16 the development time
• Performance on par with raw Hadoop!
(bar charts comparing lines of code and development time, in minutes, for the raw Hadoop and Pig versions)

Pig takes care of…
• Schema and type checking
• Translating into an efficient physical dataflow (i.e., a sequence of one or more MapReduce jobs)
• Exploiting data reduction opportunities (e.g., early partial aggregation via a combiner)
• Executing the system-level dataflow (i.e., running the MapReduce jobs)
• Tracking progress, errors, etc.