Parallel Programming with OpenMP and MPI, Winter Term 2020/21
-
Parallel programming is essential for solving numerical problems that are too demanding to be tackled by sequential programs. This course covers parallel programming with OpenMP and MPI. OpenMP is based on compiler directives and is the dominant parallel programming paradigm for shared-memory systems, while the Message Passing Interface (MPI) is a library standard that enables writing parallel programs for distributed-memory computers. After a basic introduction to parallel computer hardware (multicore CPUs, clusters, supercomputers) and the theory of parallel computing, OpenMP and MPI are introduced using simple examples and application scenarios from computational science: linear algebra primitives, parallel integration, parallel solvers, etc. Performance aspects as well as common correctness and performance pitfalls of parallel programming are covered in detail. Practical exercises enable students to apply all concepts in their own programs.
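To give a first impression of the directive-based model, here is a minimal OpenMP sketch in C (an illustrative example, not taken from the course material): it approximates pi by midpoint-rule integration of 4/(1+x^2) over [0,1], in the spirit of the parallel-integration scenario mentioned above. The subinterval count n is an arbitrary choice.

    #include <stdio.h>

    int main(void) {
        const int n = 100000000;   /* number of subintervals (arbitrary) */
        const double h = 1.0 / n;  /* width of one subinterval */
        double sum = 0.0;

        /* Worksharing directive: loop iterations are distributed among
           threads; reduction(+:sum) gives each thread a private partial
           sum and combines all partial sums at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i) {
            double x = (i + 0.5) * h;     /* midpoint of subinterval i */
            sum += 4.0 / (1.0 + x * x);   /* integrand: 4/(1+x^2)      */
        }

        printf("pi ~= %.15f\n", sum * h);
        return 0;
    }

Compiled with OpenMP support (e.g., gcc -fopenmp), the loop runs multithreaded; without it, the directive is ignored and the code remains a correct sequential program.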
This is an online course. Two-hour pre-recorded lectures and exercises will be put online weekly (every Monday), with PDF slides available for download. Live Q&A sessions (via BigBlueButton, BBB) will be held weekly (default slot: Thursdays, 3:00 pm).
The lectures and all slides will be in English. Live Q&A sessions can be in English or German, depending on the participants.
Lecturer: Dr. Georg Hager, Erlangen National High Performance Computing Center (NHR@FAU) and Institute of Physics, Universität Greifswald.
Course prerequisites: A working knowledge of programming in C, C++, or Fortran is required. In order to complete the exercise problems, participants must be able to work with a Linux command line and to access remote systems via SSH.
Tentative course outline:
- Basics of parallel computer architecture
  - Shared-memory and distributed-memory systems
- Basics of parallel computing
  - Parallel programming patterns
  - Overview of parallelization standards
- Limits of parallel computing
  - Amdahl's Law
  - Communication and synchronization overhead
  - Load imbalance
  - Hardware bottlenecks
- Introduction to OpenMP
  - Threading
  - Loop parallelism and other worksharing constructs
  - Synchronization
  - Parallelism beyond loops: tasking
  - Correctness pitfalls: race conditions, deadlocks
  - SIMD parallelism
- OpenMP performance issues
  - Topology and affinity issues
  - Synchronization and serialization
  - Other OpenMP overheads
  - Fighting load imbalance
- Introduction to the Message Passing Interface (MPI) (a minimal example follows this outline)
  - Basics of message passing
  - Beginner's message passing toolbox
  - Point-to-point communication
  - Collective communication
  - Parallelization pitfalls with MPI
- Advanced MPI
  - Virtual topologies
  - Derived data types
  - One-sided MPI
  - MPI I/O
- MPI performance issues
  - Communication overhead
  - Topology and affinity issues
  - Unintended serialization
- Hybrid MPI+OpenMP parallel programming
  - Why go hybrid?
  - Affinity issues with hybrid programming
  - Examples
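As a counterpart to the OpenMP sketch above, here is a minimal distributed-memory version of the same integration in C (again purely illustrative; the strided work distribution is an arbitrary choice, while MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Reduce, and MPI_Finalize are standard MPI calls):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process' ID     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes  */

        const int n = 100000000;   /* number of subintervals (arbitrary) */
        const double h = 1.0 / n;  /* width of one subinterval */
        double partial = 0.0;

        /* Each rank integrates over a strided subset of subintervals. */
        for (int i = rank; i < n; i += size) {
            double x = (i + 0.5) * h;
            partial += 4.0 / (1.0 + x * x);
        }

        /* Collective communication: sum the partial results on rank 0. */
        double sum = 0.0;
        MPI_Reduce(&partial, &sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi ~= %.15f\n", sum * h);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper (e.g., mpicc) and started with, e.g., mpirun -np 4 ./a.out, each process works on its own share of the loop. Adding the OpenMP directive from the first sketch to this loop (and requesting thread support via MPI_Init_thread) would turn the code into a simple hybrid MPI+OpenMP program.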
-
Kick-off meeting for interested students via BBB
Date and time: Tuesday, October 13, 15:00 s.t. (sine tempore, i.e., starting exactly at 15:00)
-
Live Q&A: Thursday, October 22, 3:00 pm.