08-PPMI: Basic Building Blocks
1. CSC2/458 Parallel and Distributed Systems
PPMI: Basic Building Blocks
Sreepathi Pai
February 13, 2018
URCS
2. Outline
• Multiprocessor Machines
• Archetypes of Work Distribution
• Multiprocessing
• Multithreading and POSIX Threads
• Non-blocking I/O or ‘Asynchronous’ Execution
3. Outline: Multiprocessor Machines
4. Very Simplified Programmer’s View of Multicore
[Figure: five cores, each with its own program counter (PC)]
• Multiple program counters (PCs)
• A MIMD machine
• To what do we set these PCs?
• Can the hardware do this automatically for us?
5. Automatic Parallelization to the Rescue?

    for(i = 0; i < N; i++) {
        for(j = 0; j < i; j++) {
            // something(i, j)
        }
    }

• Assume a stream of instructions from a single-threaded program
• How do we split this stream into pieces?
6. Thread-Level Parallelism
• Break the stream into long, continuous streams of instructions
  • Much bigger than the issue window on superscalars: ~8 instructions vs. hundreds
  • Streams are largely independent
• Best performance on current hardware
  • “Thread-Level Parallelism” (TLP)
• Related forms of parallelism: ILP (instruction-level), DLP (data-level), MLP (memory-level)
7. Parallelization Issues
• Assume we have a parallel algorithm
• Work Distribution
  • How to split up the work to be performed among threads?
• Communication
  • How to send and receive data between threads?
• Synchronization
  • How to coordinate different threads?
  • A form of communication
8. Outline: Archetypes of Work Distribution
9. Types of Parallel Programs (Simplified)
Let’s assume all parallel programs consist of “atomic” tasks.
• All tasks identical, all perform the same amount of work
  • Count words per page (many pages)
  • Matrix multiply, 2D convolution, most “regular” programs
• All tasks identical, but perform different amounts of work
  • Count words per chapter (many chapters)
  • Graph analytics, most “irregular” programs
• Different tasks
  • Pipelines
  • Servers (tasks: receive request, process request, respond to request)
10. Scheme 1: One task per thread, same work
Count words per page of a book.
11. Work assigned once to threads (Static)
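A minimal pthreads sketch of this static scheme (the pages array and count_words helper below are made up for illustration): each page gets its own thread, assigned once, and each thread writes only its private result slot.

    #include <pthread.h>
    #include <stdio.h>

    #define NPAGES 4

    const char *pages[NPAGES] = {          /* stand-in page text */
        "it was the best of times",
        "it was the worst of times",
        "it was the age of wisdom",
        "it was the age of foolishness"
    };
    int counts[NPAGES];                    /* one private result slot per thread */

    /* count whitespace-separated words in a string */
    int count_words(const char *s) {
        int n = 0, in_word = 0;
        for (; *s; s++) {
            if (*s == ' ' || *s == '\t' || *s == '\n')
                in_word = 0;
            else if (!in_word) { in_word = 1; n++; }
        }
        return n;
    }

    void *worker(void *arg) {
        long i = (long)arg;                /* this thread's page index */
        counts[i] = count_words(pages[i]);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NPAGES];
        for (long i = 0; i < NPAGES; i++)  /* one thread per page, assigned once */
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (long i = 0; i < NPAGES; i++)
            pthread_join(tid[i], NULL);    /* wait for every page count */
        for (int i = 0; i < NPAGES; i++)
            printf("page %d: %d words\n", i, counts[i]);
        return 0;
    }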
12. How many threads?
• As many as tasks
• As many as cores
• Fewer than cores
• More than cores
13. Hardware and OS Limitations
• As many as tasks
  • Usually too many: OS scheduler limitations
• As many as cores
  • A reasonable default
• Fewer than cores
  • If a hardware bottleneck is saturated
• More than cores
  • May help to cope with a lack of ILP
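On Linux, the "as many as cores" default can be queried at run time; a small sketch using POSIX sysconf:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);  /* online cores */
        if (ncores < 1)
            ncores = 1;                               /* conservative fallback */
        printf("defaulting to %ld threads\n", ncores);
        return 0;
    }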
14. Scheme 2: Multiple tasks per thread, differing work
Count words per chapter of a book.
15. Static Work Assignment
Assign chapters evenly.
16. Static Work Assignment
Chapters are of different lengths, leading to load imbalance.
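A sketch of why the even split goes wrong (the chapter lengths below are made up): each thread gets the same number of chapters, but not the same number of words, so the per-thread work differs widely.

    #include <stdio.h>

    #define NCHAPTERS 10

    /* assumed word counts per chapter; deliberately uneven */
    int chapter_len[NCHAPTERS] = {900, 120, 4000, 300, 50,
                                  2500, 700, 60, 3100, 200};

    int main(void) {
        int nthreads = 4;
        int chunk = (NCHAPTERS + nthreads - 1) / nthreads;  /* ceiling division */
        for (int t = 0; t < nthreads; t++) {
            int lo = t * chunk;
            int hi = (lo + chunk < NCHAPTERS) ? lo + chunk : NCHAPTERS;
            int work = 0;
            for (int c = lo; c < hi; c++)
                work += chapter_len[c];                     /* this thread's load */
            printf("thread %d: chapters [%d,%d): %d words\n", t, lo, hi, work);
        }
        return 0;
    }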
17. Assigning chapters by size of chapters
• Not always possible
  • May not know the size of all chapters
• A bin-packing problem
  • NP-hard
18. Dynamic (Greedy) Balancing
• Create a set of worker threads (a thread pool)
• Place work (i.e., chapters) into a parallel worklist
• Each worker thread pulls work off the worklist
• When it finishes a chapter, it pulls more work off the worklist
19. Dynamic Balancing: Parallel Worklist
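A minimal sketch of the thread pool and parallel worklist from the previous two slides (here the worklist is just a mutex-guarded chapter index, and the chapter lengths are made up): each worker pulls the next unclaimed chapter until the list runs dry, so fast workers naturally take on more chapters.

    #include <pthread.h>
    #include <stdio.h>

    #define NCHAPTERS 10
    #define NTHREADS 4

    int chapter_len[NCHAPTERS] = {900, 120, 4000, 300, 50,
                                  2500, 700, 60, 3100, 200};

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int next_chapter = 0;      /* the "worklist": next unclaimed chapter */

    /* pull one chapter off the worklist, or -1 if it is empty */
    int pull_work(void) {
        pthread_mutex_lock(&lock);
        int c = (next_chapter < NCHAPTERS) ? next_chapter++ : -1;
        pthread_mutex_unlock(&lock);
        return c;
    }

    void *worker(void *arg) {
        long id = (long)arg;
        int c;
        while ((c = pull_work()) != -1)    /* keep pulling until empty */
            printf("thread %ld takes chapter %d (%d words)\n",
                   id, c, chapter_len[c]);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        for (long t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        return 0;
    }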
20. Generalized Parallel Programs
• Threads can create additional work (“tasks”)
• Tasks may be dependent on each other
  • They form a dependence graph
• Same ideas as the thread pool
  • Except only “ready” tasks are pulled off the worklist
  • As tasks finish, their dependents are marked ready
• May have thread-specific worklists
  • To prevent contention on the main worklist
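One common way to implement "dependents are marked ready" (a sketch, not necessarily the scheme the lecture has in mind): each task carries an atomic count of unfinished dependencies; a finishing task decrements its dependents' counts, and whichever thread drops a count to zero pushes that task onto the worklist.

    #include <stdatomic.h>
    #include <stdio.h>

    #define MAXDEPS 4

    struct task {
        atomic_int pending;            /* unfinished dependencies */
        struct task *deps[MAXDEPS];    /* tasks that depend on this one */
        int ndeps;
        const char *name;
    };

    /* stand-in for pushing a task onto the parallel worklist */
    void push_ready(struct task *t) {
        printf("ready: %s\n", t->name);
    }

    /* called when t completes: mark dependents, enqueue any now ready */
    void finish(struct task *t) {
        for (int i = 0; i < t->ndeps; i++) {
            struct task *d = t->deps[i];
            if (atomic_fetch_sub(&d->pending, 1) == 1)  /* last dependency gone */
                push_ready(d);
        }
    }

    int main(void) {
        struct task b = { .name = "B" };
        atomic_init(&b.pending, 2);    /* B waits on two tasks */
        struct task a1 = { .name = "A1", .deps = { &b }, .ndeps = 1 };
        struct task a2 = { .name = "A2", .deps = { &b }, .ndeps = 1 };
        finish(&a1);                   /* B still has one pending dependency */
        finish(&a2);                   /* now B becomes ready */
        return 0;
    }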
21. Outline: Multiprocessing
22. Multiprocessing
• Simplest way to take advantage of multiple cores: run multiple processes
• fork and wait
  • The traditional way in Unix (“processes are cheap”)
  • Not cheap in Windows
• Nothing-shared model
  • The child inherits some parent state
• The only viable model available in some programming languages
  • Python
• Shared nothing: communication between processes?
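A minimal fork-and-wait sketch: the parent forks one child per task; after the fork nothing is shared, so each child works on its private copy of memory and exits, and the parent reaps them all.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NTASKS 4

    int main(void) {
        for (int i = 0; i < NTASKS; i++) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); exit(1); }
            if (pid == 0) {                  /* child: private address space */
                printf("child %d (pid %d) working\n", i, getpid());
                exit(0);                     /* child exits, sharing nothing */
            }
        }
        for (int i = 0; i < NTASKS; i++)
            wait(NULL);                      /* parent waits for each child */
        printf("all children done\n");
        return 0;
    }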
23. Communication between processes
• Unix Interprocess Communication (IPC):
  • The filesystem
  • Pipes (anonymous and named)
  • Unix sockets
  • Semaphores
  • SysV Shared Memory
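As one concrete example from the list, a sketch using an anonymous pipe: the child writes a small result into the pipe and the parent reads it back (the message itself is made up).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); exit(1); }

        if (fork() == 0) {             /* child: writes its result */
            close(fd[0]);              /* close the unused read end */
            const char *msg = "chapter 3: 4000 words";
            write(fd[1], msg, strlen(msg) + 1);
            exit(0);
        }

        close(fd[1]);                  /* parent: close the unused write end */
        char buf[64];
        if (read(fd[0], buf, sizeof(buf)) > 0)
            printf("parent received: %s\n", buf);
        wait(NULL);
        return 0;
    }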
24. Outline: Multithreading and POSIX Threads
25. Multithreading
• One process
• The process creates threads (“lightweight processes”)
  • How is a thread different from a process? [What minimum state does a thread require?]
• Everything-shared model
• Communication: read and write to memory
  • Relies on programmers to think carefully about access to shared data
  • Tricky
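A sketch of why shared-memory communication is tricky: two threads increment one shared counter with no synchronization, so increments are lost and the final value usually comes out below the expected 2000000 (the iteration count is arbitrary). Compile with gcc race.c -lpthread.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                  /* shared by every thread in the process */

    void *inc(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                 /* read-modify-write: not atomic! */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, inc, NULL);
        pthread_create(&t2, NULL, inc, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* expected 2000000; often less */
        return 0;
    }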
26. Multithreading Programming Models
Roughly in decreasing order of power and complexity:
• POSIX threads (pthreads)
  • C++11 threads may be simpler than this
• Thread Building Blocks (TBB) from Intel
• Cilk
• OpenMP
27. POSIX Threads on Linux
• Processes == threads to the scheduler in Linux
  • 1:1 threading model (see an OS textbook)
• pthreads is provided as a library
  • gcc test.c -lpthread
• The OS scheduler can affect performance significantly
  • Especially with user-level threads
28. Multithreading Components
• Thread Management
  • Creation, death, waiting, etc.
• Communication
  • Shared variables (ordinary variables)
  • Condition variables
• Synchronization
  • Mutexes (mutual exclusion)
  • Barriers
  • (Hardware) read-modify-writes, or “atomics”
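Revisiting the racy counter from slide 25 with one of the components above: a mutex makes the read-modify-write on the shared counter mutually exclusive, so the result is reliably 2000000 (at a real cost in speed; this is a sketch, not a recommendation for hot loops).

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *inc(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* one thread in here at a time */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, inc, NULL);
        pthread_create(&t2, NULL, inc, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* now reliably 2000000 */
        return 0;
    }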
29. Outline: Non-blocking I/O or ‘Asynchronous’ Execution