Large-Scale Data Processing with MapReduce
1 .Large-Scale Data Processing with MapReduce. AAAI 2011 Tutorial. Jimmy Lin, University of Maryland. Sunday, August 7, 2011. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; see http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details. These slides are available on my homepage at http://www.umiacs.umd.edu/~jimmylin/
2 .First things first… About me Course history Audience survey
3 .Agenda Setting the stage: Why large data? Why is this different? Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce
4 .Expectations Focus on “thinking at scale” Deconstruction into “design patterns” Basic intuitions, not fancy math Mapping well-known algorithms to MapReduce Not a tutorial on programming Hadoop Entry point to book
5 .Setting the Stage: Why large data? Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce
6 .Source: Wikipedia (Everest)
7 .How much data? 6.5 PB of user data + 50 TB/day (5/2009); processes 20 PB a day (2008); 36 PB of user data + 80-90 TB/day (6/2010); Wayback Machine: 3 PB + 100 TB/month (3/2009); LHC: 15 PB a year (any day now); LSST: 6-10 PB a year (~2015). 640K ought to be enough for anybody.
8 .No data like more data! (Banko and Brill, ACL 2001) (Brants et al., EMNLP 2007) s/knowledge/data/g; How do we get here if we’re not Google?
9 .cheap commodity clusters + simple, distributed programming models = data-intensive computing for the masses!
10 .Setting the Stage: Why is this different? Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce
11 .Parallel computing is hard! [Diagrams: message passing among processes P1–P5; shared memory; master/slaves, producer/consumer, and work-queue patterns] Different programming models: message passing vs. shared memory. Different programming constructs: mutexes, condition variables, barriers, …; masters/slaves, producers/consumers, work queues, … Fundamental issues: scheduling, data distribution, synchronization, inter-process communication, robustness, fault tolerance, … Common problems: livelock, deadlock, data starvation, priority inversion…; dining philosophers, sleeping barbers, cigarette smokers, … Architectural issues: Flynn’s taxonomy (SIMD, MIMD, etc.), network topology, bisection bandwidth, UMA vs. NUMA, cache coherence. The reality: the programmer shoulders the burden of managing concurrency… (I want my students developing new algorithms, not debugging race conditions)
12 .Where the rubber meets the road Concurrency is difficult to reason about: at the scale of datacenters (even across datacenters), in the presence of failures, in terms of multiple interacting services. The reality: lots of one-off solutions and custom code; write your own dedicated library, then program with it; the burden is on the programmer to explicitly manage everything.
13 .Source: Ricardo Guimarães Herrmann
14 .Source: NY Times (6/14/2006) The datacenter is the computer! I think there is a world market for about five computers.
15 .What’s the point? It’s all about the right level of abstraction: hide system-level details from the developers; no more race conditions, lock contention, etc. Separating the what from the how: the developer specifies the computation that needs to be performed; the execution framework (“runtime”) handles the actual execution. The datacenter is the computer!
16 .“Big Ideas” Scale “out”, not “up”: limits of SMP and large shared-memory machines. Move processing to the data: clusters have limited bandwidth. Process data sequentially, avoid random access: seeks are expensive, disk throughput is reasonable. Seamless scalability: from the mythical man-month to the tradable machine-hour.
17 .Introduction to MapReduce Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce
18 .Typical Large-Data Problem Iterate over a large number of records; extract something of interest from each; shuffle and sort intermediate results; aggregate intermediate results; generate final output. Key idea: provide a functional abstraction for these two operations, Map (extract) and Reduce (aggregate). (Dean and Ghemawat, OSDI 2004)
19 .Roots in Functional Programming [Diagram: Map applies a function g to each input element independently; Fold combines the resulting values with a function f]
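A minimal sketch of the map/fold pattern the diagram illustrates, using Java streams (the data, the squaring function standing in for g, and addition standing in for f are all just illustrative choices):

```java
import java.util.Arrays;
import java.util.List;

public class MapFoldSketch {
    public static void main(String[] args) {
        List<Integer> records = Arrays.asList(1, 2, 3, 4, 5);

        // Map: apply g (here, squaring) to every element independently;
        // each application is side-effect free, so they could run in parallel.
        // Fold: combine the mapped values with f (here, addition), starting
        // from an initial value of 0.
        int result = records.stream()
                            .map(x -> x * x)          // g
                            .reduce(0, Integer::sum); // fold with f = +
        System.out.println(result); // 1 + 4 + 9 + 16 + 25 = 55
    }
}
```

Because g has no side effects and f is associative, the map applications can run in any order (or in parallel) and the fold can be partially applied locally before a final combine, which is exactly the structure MapReduce exploits.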
20 .MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are sent to the same reducer The execution framework handles everything else…
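Expressed as plain Java interfaces (an illustrative sketch, not the Hadoop API), the contract on this slide might look like the following: a mapper turns one input pair into zero or more intermediate pairs, and a reducer turns an intermediate key together with all of its values into zero or more output pairs.

```java
import java.util.List;
import java.util.Map;

// Sketch of the MapReduce contract; type parameters follow the slide's
// notation: map(k, v) -> <k', v'>* and reduce(k', [v']) -> <k', v'>*.
interface MapFn<K, V, K2, V2> {
    List<Map.Entry<K2, V2>> map(K key, V value);
}

interface ReduceFn<K2, V2> {
    // The framework groups all values sharing the same key before calling this.
    List<Map.Entry<K2, V2>> reduce(K2 key, Iterable<V2> values);
}
```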
21 .[Diagram: mappers consume input pairs (k1, v1) … (k6, v6) and emit intermediate key-value pairs; Shuffle and Sort aggregates values by key; reducers process each key with its grouped values and emit the final output pairs]
22 .MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are sent to the same reducer The execution framework handles everything else… What’s “everything else”?
23 .MapReduce “Runtime” Handles scheduling Assigns workers to map and reduce tasks Handles “data distribution” Moves processes to data Handles synchronization Gathers, sorts, and shuffles intermediate data Handles errors and faults Detects worker failures and restarts Everything happens on top of a distributed FS
24 .MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are reduced together The execution framework handles everything else… Not quite…usually, programmers also specify: partition (k’, number of partitions) → partition for k’ Often a simple hash of the key, e.g., hash(k’) mod n Divides up key space for parallel reduce operations combine (k’, v’) → <k’, v’>* Mini-reducers that run in memory after the map phase Used as an optimization to reduce network traffic
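For concreteness, here is a sketch of the default partitioning behavior described above (hash(k') mod n), written against Hadoop's org.apache.hadoop.mapreduce API; the key and value types (Text, IntWritable) are simply the ones a word count job would use. A combiner, by contrast, is usually just a reducer class wired in with job.setCombinerClass(...), which is only safe when the reduce function is associative and commutative.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// partition(k', numPartitions) -> partition for k':
// a simple hash of the key, mod the number of reducers.
public class HashModPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Mask the sign bit so the result is always non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

Wiring it up looks like job.setPartitionerClass(HashModPartitioner.class) and job.setCombinerClass(IntSumReducer.class), assuming the Job and reducer from the word count sketch that accompanies the “Hello World” slide below.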
25 .[Diagram: the same data flow as the previous slide, extended so that each mapper's output first passes through a combiner (local aggregation of values sharing a key) and a partitioner before the Shuffle and Sort, shrinking the intermediate data that must be sent to the reducers]
26 .Two more details… Barrier between map and reduce phases But we can begin copying intermediate data earlier Keys arrive at each reducer in sorted order No enforced ordering across reducers
27 .“Hello World”: Word Count
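The word count example, sketched here against the Hadoop Java API (class names, the tokenization strategy, and the command-line input/output paths are my own choices, not anything prescribed by the slides): the mapper emits (word, 1) for every token, and the reducer sums the counts for each word; the same reducer class doubles as a combiner.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // map(offset, line) -> (word, 1) for each token in the line
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // reduce(word, [c1, c2, ...]) -> (word, sum of counts)
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```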
28 .MapReduce can refer to… The programming model The execution framework (aka “runtime”) The specific implementation Usage is usually clear from context!
29 .MapReduce Implementations Google has a proprietary implementation in C++ Bindings in Java, Python Hadoop is an open-source implementation in Java Original development led by Yahoo Now an Apache open source project Emerging as the de facto big data stack Rapidly expanding software ecosystem Lots of custom research implementations For GPUs, cell processors, etc. Includes variations of the basic programming model Most of these slides are focused on Hadoop