Section outline

    Most HPC systems are clusters of shared memory nodes, and many use accelerators such as GPUs. To use such systems efficiently, both memory consumption and communication time have to be optimized. Hybrid programming may therefore combine distributed memory parallelization on the node interconnect (e.g., with MPI) with shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory).
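
    To give a flavor of this programming model, a minimal MPI+OpenMP sketch might look as follows (purely illustrative, not course material; the file name hybrid_hello.c is hypothetical):

        /* Minimal hybrid sketch: MPI between nodes, OpenMP threads within
         * each node. Only the master thread calls MPI (FUNNELED mode). */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank, size;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            if (provided < MPI_THREAD_FUNNELED)
                MPI_Abort(MPI_COMM_WORLD, 1);   /* thread support too low */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Every OpenMP thread of every MPI process reports in. */
            #pragma omp parallel
            printf("MPI rank %d/%d, OpenMP thread %d/%d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());

            MPI_Finalize();
            return 0;
        }

    How to compile and start such a program is covered on day 1; typically something like "mpicc -fopenmp hybrid_hello.c" followed by, e.g., "OMP_NUM_THREADS=4 mpirun -np 2 ./a.out", with the exact flags depending on the compiler, MPI library, and batch system.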

    This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes, with and without accelerators. Special consideration is given to multi-socket multi-core systems in highly parallel environments. In addition, we will review the shared memory programming interface introduced in MPI-3.0, which can be combined with inter-node MPI communication.
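
    As a minimal sketch of the MPI-3.0 shared memory interface (the lock_all/sync/barrier pattern follows the MPI standard's recommendation; names and sizes are illustrative):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            /* Split COMM_WORLD into one communicator per shared-memory node. */
            MPI_Comm nodecomm;
            MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                                MPI_INFO_NULL, &nodecomm);
            int noderank, nodesize;
            MPI_Comm_rank(nodecomm, &noderank);
            MPI_Comm_size(nodecomm, &nodesize);

            /* Each rank contributes one double to a node-wide shared window. */
            double *mem;
            MPI_Win win;
            MPI_Win_allocate_shared((MPI_Aint)sizeof(double), sizeof(double),
                                    MPI_INFO_NULL, nodecomm, &mem, &win);

            /* The left neighbor's segment is directly load/store accessible. */
            double *left;
            MPI_Aint segsize;
            int disp;
            int lrank = (noderank + nodesize - 1) % nodesize;
            MPI_Win_shared_query(win, lrank, &segsize, &disp, &left);

            MPI_Win_lock_all(0, win);
            mem[0] = (double)noderank;   /* store into own segment          */
            MPI_Win_sync(win);           /* make own stores visible         */
            MPI_Barrier(nodecomm);       /* wait until everyone has written */
            MPI_Win_sync(win);           /* pick up the others' stores      */
            printf("node rank %d reads %g from its left neighbor\n",
                   noderank, left[0]);
            MPI_Win_unlock_all(win);

            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }

    Because all ranks on a node can load and store directly into the window, intra-node message passing can be replaced by direct memory access, while ordinary MPI handles the communication between nodes.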

    Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming. Hands-on sessions are included on all days. Tools for hybrid programming such as thread/process placement support and performance analysis are presented in a "how-to" section.
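
    As a taste of the placement "how-to", a small run-time check in the following style can verify pinning (assumes Linux/glibc, since sched_getcpu is not portable; purely illustrative):

        #define _GNU_SOURCE
        #include <mpi.h>
        #include <omp.h>
        #include <sched.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Each thread reports the core it currently runs on; with
             * correct pinning the core numbers stay stable and do not
             * overlap between processes. */
            #pragma omp parallel
            printf("rank %d, thread %d, running on core %d\n",
                   rank, omp_get_thread_num(), sched_getcpu());

            MPI_Finalize();
            return 0;
        }

    Pinning itself is typically controlled via OMP_PLACES/OMP_PROC_BIND and the binding options of the MPI launcher; the details are implementation-specific and part of the day-1 hands-on.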

    This course provides scientific training in Computational Science and additionally fosters scientific exchange among the participants.

    This course is a joint training event of SIDE and EuroCC-Austria, the German and Austrian National Competence Centres for High-Performance Computing. It is organized by the HLRS in cooperation with the ASC Research Center, TU Wien and NHR@FAU.

    Agenda & Content

    1st day – Tuesday, 10 February 2026

    10:45   Join in
    11:00      Welcome
    11:15      Introduction to Hybrid Programming in HPC – MPI+X
    11:45      Programming Models
    11:50         - MPI + MPI-3.0 Shared Memory
    12:30   Lunch
    14:00         - MPI + OpenMP lecture and hands-on - how to compile and start
    15:15   Break
    15:30         - MPI + OpenMP lecture and hands-on - how to do pinning
    16:45      Q & A
    17:00   End of first day

    2nd day – Wednesday, 11 February 2026

    08:45   Join in
    09:00         - continue: MPI + OpenMP
    09:00         - Case study: Simple 2D stencil smoother
    09:30            Hands-on - hybrid through OpenMP parallelization
    10:45   Break
    11:00         - Overlapping Communication and Computation
    11:30            Hands-on - taskloops (a code sketch of this overlap pattern follows this day's agenda)
    12:15         - MPI + OpenMP Conclusions
    12:30   Lunch
    14:00         - MPI + Accelerators lecture and hands-on (a code sketch follows the full agenda)
    15:15   Break
    15:30         - MPI + Accelerators lecture and hands-on
    16:45      Q & A
    17:00   End of second day
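
    The sketch referenced above, illustrating the overlap pattern of the 11:00/11:30 slots: one OpenMP task performs the halo exchange while taskloop chunks update the interior (the 1D "stencil", sizes, and grainsize are illustrative only, not course material):

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        #define N 1000000
        static double a[N], b[N];

        int main(int argc, char **argv)
        {
            int provided, rank, size;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
            if (provided < MPI_THREAD_SERIALIZED)
                MPI_Abort(MPI_COMM_WORLD, 1);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int right = (rank + 1) % size, left = (rank + size - 1) % size;
            double halo_out = (double)rank, halo_in = 0.0;
            for (long i = 0; i < N; i++) a[i] = (double)i;

            #pragma omp parallel
            #pragma omp single
            {
                /* Communication task: runs on some thread of the team ... */
                #pragma omp task
                MPI_Sendrecv(&halo_out, 1, MPI_DOUBLE, right, 0,
                             &halo_in,  1, MPI_DOUBLE, left,  0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

                /* ... while the interior update proceeds concurrently. */
                #pragma omp taskloop grainsize(10000)
                for (long i = 1; i < N - 1; i++)
                    b[i] = 0.5 * (a[i - 1] + a[i + 1]);

                #pragma omp taskwait            /* halo has now arrived    */
                b[0] = 0.5 * (halo_in + a[1]);  /* boundary point last     */
            }

            if (rank == 0) printf("b[0] = %g\n", b[0]);
            MPI_Finalize();
            return 0;
        }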

    3rd day – Thursday, 12 February 2026

    08:45   Join in
    09:00      Programming Models (continued)
    09:05         - MPI + Accelerators lecture and hands-on
    10:30   Break
    10:45         - MPI + Accelerators lecture and hands-on
    11:45   Break
    12:00      Conclusions, Q & A, Feedback
    13:00   End of third day (course)
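
    Ahead of the MPI + Accelerators sessions, a hedged sketch of one common approach, CUDA-aware MPI, where GPU buffers are passed directly to MPI calls (whether the course uses this, OpenMP target offload, or another accelerator model is up to the lecturers; this is only one possibility):

        /* Assumes the MPI library was built with CUDA support; otherwise
         * the buffers would have to be staged through host memory with
         * cudaMemcpy before and after the MPI call. */
        #include <mpi.h>
        #include <cuda_runtime.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int n = 1 << 20;             /* buffer size, illustrative */
            double *dsend, *drecv;             /* device memory             */
            cudaMalloc((void **)&dsend, n * sizeof(double));
            cudaMalloc((void **)&drecv, n * sizeof(double));
            cudaMemset(dsend, 0, n * sizeof(double));

            /* Ring exchange of GPU buffers without explicit host staging. */
            int right = (rank + 1) % size, left = (rank + size - 1) % size;
            MPI_Sendrecv(dsend, n, MPI_DOUBLE, right, 0,
                         drecv, n, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            if (rank == 0) printf("exchanged %d doubles via device memory\n", n);

            cudaFree(dsend);
            cudaFree(drecv);
            MPI_Finalize();
            return 0;
        }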

    Date: Tuesday, February 10, 2026, 10:45 - Thursday, February 12, 2026, 13:00
    Location: HLRS, Room 0.439 / Rühle Saal, University of Stuttgart, Nobelstr. 19, D-70569 Stuttgart, Germany

    Lecturers:

    Tobias Haas (HLRS), Claudia Blaas-Schenner (ASC Team, TU Wien), Georg Hager (NHR@FAU)

    Course material (here ☺):

    http://tiny.cc/MPIX-HLRS