
This online course is designed for students and scientists interested in parallel programming with MPI. It provides a thorough introduction to MPI, the most widespread parallelization paradigm in high performance computing (HPC).
Topics covered by this course:
- Basic principles of distributed-memory computer architecture and the Message Passing Interface (MPI)
- Blocking and non-blocking point-to-point communication
- Blocking and non-blocking collective communication
- Derived data types
- Subcommunicators, intercommunicators
- Performance issues
Exercises:
- 9 exercises of varying difficulty
- All exercises are already MPI-parallel; however, they do not compile or run as provided because markers have been placed in the code. To solve an exercise, replace the markers with correct MPI code.
- Commands in the exercise descriptions are shown on a silver background, with each line prefixed by a $.
Lecturers: Dr. Alireza Ghasemi and Dr. Georg Hager (NHR@FAU)
