Message Passing Interface (MPI) is a dominant programming model on clusters and distributed-memory architectures. This course focuses on its basic concepts, in particular exchanging data via point-to-point and collective operations. Attendees will be able to immediately test and understand these constructs in hands-on sessions.
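To give a first flavour of these two communication styles, here is a minimal sketch in C (our illustration, not part of the course materials) that sends an integer from rank 0 to rank 1 and then broadcasts it to all ranks; it assumes an MPI compiler wrapper such as mpicc and at least two ranks at run time.

/* Minimal illustration of point-to-point (MPI_Send/MPI_Recv) and
 * collective (MPI_Bcast) communication.
 * Build: mpicc p2p_bcast.c -o p2p_bcast
 * Run:   mpirun -np 2 ./p2p_bcast   (needs at least two ranks) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Point-to-point: rank 0 sends one integer to rank 1. */
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Collective: rank 0 broadcasts the value to all ranks. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d has value %d after the broadcast\n", rank, value);

    MPI_Finalize();
    return 0;
}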
This course is based on a course developed by Rolf Rabenseifner from the High-Performance Computing Center Stuttgart (HLRS). His knowledge and material have been shared with various HPC training centres in Europe through the annual train-the-trainer program of HLRS.
After the course, attendees should be able to understand MPI applications and write their own MPI code.
60% beginner, 40% intermediate
English
Ondřej Meca holds a Ph.D. in Computer Science from VSB - Technical University of Ostrava, Czech Republic. He is currently a member of the Infrastructure Research Lab at IT4Innovations National Supercomputing Center. His research interests include verifying parallel algorithms, developing pre/post-processing algorithms for large-scale engineering problems, and developing highly scalable linear solvers.
Kristian Kadlubiak is a researcher at the INFRA lab of IT4Innovations National Supercomputing Center, where he is responsible for designing and developing various acceleration and optimization techniques in the flagship application ESPRESO. He specializes in parallel and vector processing, accelerator offloading, and performance tuning in general. He holds a master's degree in embedded and computer systems from the Brno University of Technology, where he is also a part-time Ph.D. student. In his doctoral research, he is developing modifications of the Local Fourier Basis (LFB) method to adapt it for efficient use on HPC systems.
For the hands-on sessions, you should know how to work on the Unix/Linux command line and be able to program in C/C++, Fortran, or Python.
To run before the course
The links below contain a simple test application for verifying the installation of an MPI library (see the README file). All you need is a C, C++, or Fortran compiler and an up-to-date MPI library. Use the link that matches your archiving application; a minimal sanity-check sketch follows the links:
TEST.tar.gz
TEST.zip
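If you want a quick sanity check before unpacking the archive, a program along the following lines should compile and run with any working MPI installation (a generic sketch; the actual TEST files may differ, see their README):

/* Generic MPI installation check: each rank prints its rank and the total
 * number of ranks.
 * Build: mpicc hello.c -o hello
 * Run:   mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}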
The slides will be needed during the exercises; please download them from the link below. The file contains internal references that work best with Acrobat.
Please also download the examples that will be needed during the course.
MPI31single.tar.gz
MPI31single.zip
The Message-Passing Interface standard, version 4.0, can be downloaded from https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf
This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 101101903. The JU receives support from the Digital Europe Programme and Germany, Bulgaria, Austria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, Greece, Hungary, Ireland, Italy, Lithuania, Latvia, Poland, Portugal, Romania, Slovenia, Spain, Sweden, France, Netherlands, Belgium, Luxembourg, Slovakia, Norway, Türkiye, Republic of North Macedonia, Iceland, Montenegro, Serbia. This project has received funding from the Ministry of Education, Youth, and Sports of the Czech Republic.
This course was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254).