1. Introduction to Spark (Shannon Quinn, with thanks to Paco Nathan and Databricks)

2. Quick Demo

4. API Hooks
Scala / Java: all Java libraries (*.jar); http://www.scala-lang.org
Python: Anaconda; https://store.continuum.io/cshop/anaconda/

5. Introduction

6. Spark Structure
Start Spark on a cluster
Submit code to be run on it
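A rough sketch of that workflow in PySpark (the file name, app name, and master URL below are illustrative, not from the slides):

    # minimal_app.py -- a hypothetical minimal PySpark application
    from pyspark import SparkContext

    sc = SparkContext(appName="MinimalApp")   # master URL is supplied by spark-submit

    # Trivial computation to confirm the cluster executes our code
    print(sc.parallelize(range(100)).sum())

    sc.stop()

    # Submitted to a running cluster with, e.g.:
    #   spark-submit --master spark://master-host:7077 minimal_app.py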

16. Another Perspective

17. Step by step

20. Example: WordCount
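A sketch of WordCount in PySpark (the input path is a placeholder; assumes a SparkContext sc as on slide 27):

    counts = (sc.textFile("input.txt")                      # one record per line
                .flatMap(lambda line: line.split())         # split lines into words
                .map(lambda word: (word, 1))                # pair each word with a count of 1
                .reduceByKey(lambda a, b: a + b))           # sum the counts per word

    print(counts.take(10))                                  # action: triggers the computation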

22. Limitations of MapReduce
Performance bottlenecks: not every job can be cast as a batch process (graphs, for example)
Programming in Hadoop is hard: boilerplate, boilerplate everywhere

23. Initial Workaround: Specialization

24. Along Came Spark
Spark’s goal was to generalize MapReduce to support new applications within the same engine
Two additions: fast data sharing and general DAGs (directed acyclic graphs)
Best of both worlds: easy to program and a more efficient engine in general

25. Codebase Size

26. More on Spark
More general: supports the map/reduce paradigm, supports vertex-based paradigms, and serves as a general compute engine (DAG)
More API hooks: Scala, Java, and Python
More interfaces: batch (Hadoop), real-time (Storm), and interactive (???)

27. Interactive Shells
Spark creates a SparkContext object (cluster information)
In either shell, it is available as sc
External programs use a static constructor to instantiate the context
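A minimal sketch of the difference (the app name and master URL are illustrative):

    # Inside the pyspark shell, sc is predefined:
    #   >>> sc.parallelize([1, 2, 3]).count()
    # A standalone program constructs its own context:
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("MyApp").setMaster("local[4]")
    sc = SparkContext(conf=conf)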

28. Interactive Shells
spark-shell --master <master URL> (e.g., local[N] to run locally with N threads, or spark://host:port for a standalone cluster)

29. Interactive Shells
The master connects to the cluster manager, which allocates resources across applications
Acquires executors on cluster nodes: worker processes that run computations and store data
Sends app code to the executors
Sends tasks for the executors to run

30. Resilient Distributed Datasets (RDDs)
Resilient Distributed Datasets (RDDs) are the primary data abstraction in Spark: fault-tolerant and operated on in parallel
Two ways to create them: parallelized collections and Hadoop datasets
Two types of RDD operations: transformations (lazy) and actions (immediate)
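A quick sketch of both creation routes (assumes a SparkContext sc; the file path is a placeholder):

    # Parallelized collection: distribute an in-memory Python list
    nums = sc.parallelize([1, 2, 3, 4, 5])

    # Hadoop dataset: one string element per line of the file
    lines = sc.textFile("hdfs:///data/input.txt")

    doubled = nums.map(lambda x: x * 2)   # transformation: lazy
    doubled.collect()                     # action: immediate, returns [2, 4, 6, 8, 10]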

32. Resilient Distributed Datasets (RDDs)
RDDs can be created from any file stored in HDFS, the local filesystem, Amazon S3, or HBase
Text files, SequenceFiles, or any other Hadoop InputFormat
Any directory or glob: /data/201414*
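All of these sources go through the same calls; a brief sketch with placeholder paths:

    one_file = sc.textFile("hdfs:///data/2014/part-00000")   # single file in HDFS
    local    = sc.textFile("file:///tmp/input.txt")          # local filesystem
    a_glob   = sc.textFile("/data/201414*")                  # the glob from the slide
    pairs    = sc.sequenceFile("hdfs:///data/seq")           # SequenceFile of key/value pairs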

33. Resilient Distributed Datasets (RDDs)
Transformations create a new RDD from an existing one
Lazily evaluated: results are not immediately computed
The pipeline of subsequent transformations can be optimized
Lost data partitions can be recovered
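A sketch of that laziness, with a hypothetical log file:

    lines  = sc.textFile("access.log")
    errors = lines.filter(lambda l: "ERROR" in l)   # nothing computed yet
    firsts = errors.map(lambda l: l.split()[0])     # still nothing computed

    # Only an action forces evaluation; Spark can then run the
    # filter and map pipelined together in a single pass.
    firsts.count()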

38. Closures in Java

39. Resilient Distributed Datasets (RDDs)
Actions compute a result from an existing RDD and return it to the driver (unlike transformations, they do not create a new RDD)
Eagerly evaluated: results are immediately computed
Running an action applies all the previous lazy transformations (cache results?)
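A few common actions, continuing the hypothetical errors RDD from the previous sketch (the output path is a placeholder):

    errors.count()                     # number of matching lines
    errors.take(5)                     # first five elements, returned to the driver
    errors.reduce(lambda a, b: a if len(a) > len(b) else b)   # longest line
    errors.saveAsTextFile("out/")      # write the RDD out to storage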

43. Resilient Distributed Datasets (RDDs)
Spark can persist / cache an RDD in memory across operations
Each slice is persisted in memory and reused in subsequent actions involving that RDD
The cache provides fault tolerance: if a partition is lost, it will be recomputed using the transformations that created it
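Caching is one call, after which later actions reuse the in-memory partitions (continuing the same hypothetical RDD):

    errors.cache()     # same as persist() at the default MEMORY_ONLY level

    errors.count()     # first action: computes the RDD and fills the cache
    errors.take(10)    # reuses the cached partitions instead of re-reading the file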

46. Broadcast Variables
Spark’s version of Hadoop’s DistributedCache
A read-only variable cached on each node
Spark internally distributes broadcast variables in a way that minimizes communication cost
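A minimal broadcast sketch (the lookup table is illustrative):

    lookup = sc.broadcast({"us": "United States", "fr": "France"})

    # Every executor reads the same node-local copy through .value
    names = sc.parallelize(["us", "fr", "us"]).map(lambda c: lookup.value[c])
    names.collect()    # ['United States', 'France', 'United States']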

48. Accumulators
Spark’s version of Hadoop’s Counter
Variables that can only be added to, through an associative operation
Native support for numeric accumulator types and standard mutable collections
Users can extend support to new types
Only the driver program can read an accumulator’s value
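A minimal accumulator sketch using the natively supported numeric type:

    accum = sc.accumulator(0)

    # Tasks on the executors may only add to it...
    sc.parallelize([1, 2, 3, 4]).foreach(lambda x: accum.add(x))

    # ...and only the driver may read the result
    print(accum.value)   # 10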

50. Key/Value Pairs
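Pair RDDs (RDDs of 2-tuples) unlock the by-key operations; a small illustrative sketch:

    pets = sc.parallelize([("cat", 1), ("dog", 1), ("cat", 2)])

    pets.reduceByKey(lambda a, b: a + b).collect()   # e.g. [('cat', 3), ('dog', 1)]
    pets.sortByKey().collect()                       # sorted by key
    pets.groupByKey()                                # lazily groups values per key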

51. Resources
Original slide deck: http://cdn.liber118.com/workshop/itas_workshop.pdf
Code samples:
https://gist.github.com/ceteri/f2c3486062c9610eac1d
https://gist.github.com/ceteri/8ae5b9509a08c08a1132
https://gist.github.com/ceteri/11381941