Exercises will be done on CoolMUC-2 @ LRZ with 28-way Haswell-based nodes and an FDR14 InfiniBand interconnect.

    Please open a terminal (or PuTTY) and log in
    (for more details see https://doku.lrz.de/pages/viewpage.action?pageId=111607825):

•  ssh lxlogin1.lrz.de -l username

    Use username and password (provided by LRZ staff).

    Alternative login nodes:

    If your terminal becomes unresponsive, please hit the Return key, then type ~.
    (If that does not help, open a new terminal and connect again.)

    If you see a line after login like:
/usr/bin/manpath: can't set the locale; make sure $LC_* and $LANG are correct
    then please do the following (this needs to be done only once and after that you can ignore the warning):
             cat /lrz/sys/courses/hhyp1s22/bashrc_addons >> ~/.bashrc
             source ~/.bashrc
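For context, the warning means the shell's locale variables are unset. The course file most likely just sets them; a hedged sketch of what such a bashrc addition typically looks like (the actual contents of /lrz/sys/courses/hhyp1s22/bashrc_addons may differ):

```shell
# Assumed locale settings -- the real course file may choose different values.
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
```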


Copy the exercises (needs to be done only once):

    Hands-on labs are prepared for the course participants directly on the cluster.
    (A link to download the exercise material outside of the course is provided on the Moodle.)

•  cd ~; cp -a /lrz/sys/courses/hhyp1s22/HY-LRZ .

•  cd ~/HY-LRZ


~/HY-LRZ holds subdirectories for all exercises:

•  he-hy      - MPI+OpenMP: compiling, starting, pinning

•  jacobi     - MPI+OpenMP: hybrid through OpenMP parallelization

•  data-rep   - MPI: how to avoid replicated data


Load the required modules:

    The Intel software stack is automatically loaded at login.

    The Intel compilers are called icc (for C), icpc (for C++) and ifort (for Fortran).

    For reasonable optimization including SIMD vectorization, use the options -O3 -xavx
    (sometimes you might get better performance with -O2 ...)

    By default, OpenMP directives in your code are ignored. Use the -qopenmp option to activate OpenMP. 


Compile @login node:

•  mpiicc   -o my-program.exe my-program.c     #   C       (example is for pure MPI; note the double ii)

•  mpiifort -o my-program.exe my-program.f90   #   Fortran (example is for pure MPI; note the double ii)


Submit to the queuing system (and use the node reservation):

    During the course, we provide a node reservation to avoid queuing times.
    (For general usage please omit the --reservation=hhyp1s22 line in the job scripts.)

•  sbatch   job.sh                             #   submit a job

•  squeue   -M cm2                             #   check own jobs

•  scancel  -M cm2  jobid                      #   cancel a job

•  output will be written to: slurm-*.out

•  sinfo    -M cm2  --reservation              #   show reservation
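The commands above assume a job script; a minimal sketch of what job.sh might look like (the cluster name cm2 and the reservation hhyp1s22 follow the text above, while the partition, node/task counts, time limit, and executable name are assumptions to adapt to your run):

```shell
#!/bin/bash
#SBATCH -J hybrid-test              # job name (illustrative)
#SBATCH -o slurm-%j.out             # output file, matching slurm-*.out above
#SBATCH -M cm2                      # cluster, as used with squeue/scancel above
#SBATCH --reservation=hhyp1s22      # course reservation; omit for general usage
#SBATCH --nodes=1                   # assumed values -- adjust to your experiment
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=7           # 4 MPI ranks x 7 threads = 28 cores per Haswell node
#SBATCH --time=00:10:00

# One OpenMP thread per allocated core of each MPI rank.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

mpiexec -n $SLURM_NTASKS ./my-program.exe
```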


  CoolMUC2 overview: https://doku.lrz.de/display/PUBLIC/CoolMUC-2

  Details: https://doku.lrz.de/display/PUBLIC/Running+parallel+jobs+on+the+Linux-Cluster

  Examples: https://doku.lrz.de/display/PUBLIC/Example+parallel+job+scripts+on+the+Linux-Cluster

  Resource limits: https://doku.lrz.de/display/PUBLIC/Resource+limits+for+parallel+jobs+on+Linux+Cluster

Last modified: Tuesday, 21 June 2022, 4:55 PM