Topic outline

  • This online course is designed for students and scientists with an interest in parallel programming with MPI. It provides a thorough introduction to MPI, the most widespread parallelization paradigm in high performance computing (HPC).

    Topics covered by this course:
    • Basic principles of distributed-memory computer architecture and the Message Passing Interface (MPI)
    • Blocking and non-blocking point-to-point communication
    • Blocking and non-blocking collective communication
    • Derived data types
    • Subcommunicators, intercommunicators
    • Performance issues

    Exercises:
    • 9 exercises of varying difficulty
    • All exercises are already MPI-parallel; however, they are not compilable/runnable as provided because markers have been placed in the code. To solve an exercise, replace the markers with correct code.
    • Commands in the exercise descriptions are shown on a silver background, with each line prefixed by a $.

    Lecturers: Dr. Alireza Ghasemi and Dr. Georg Hager (NHR@FAU)





  • Basics

  • Point-to-Point (blocking)

  • Point-to-Point (nonblocking)

  • Collectives (blocking)

  • Collective Operations

  • Collectives (nonblocking)

  • Derived Datatypes

  • Communicators

  • Odds & Ends, Performance Considerations

  • Misc info and links

  • Feedback


    Please use this link to give us your opinion about the course. It will take no more than a few minutes. Your feedback is extremely valuable to us!

    https://survey.nhr.fau.de/index.php/959685?lang=en