Annotation
Automating benchmarks is important for reproducibility and, hence, comparability, which is the main goal of benchmarking. Furthermore, managing different and frequently changing combinations of parameters is error-prone and often causes significant work, especially when the parameter space grows large. JUBE alleviates these problems by performing and analyzing benchmarks in a uniform, automated way. Its custom workflows can adapt to any architecture or setup. As such, this webinar addresses developers and users of HPC applications, as well as anyone looking to automate an HPC workflow (including preparing data, building binaries, running HPC jobs, analyzing results, and storing results in databases).
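To give a flavour of such a workflow, the following is a minimal JUBE-style XML sketch (the names `number_set`, `number`, and `say_hello` are illustrative, not taken from the webinar material): JUBE expands the comma-separated parameter values and executes the step once per value, so one short script covers the whole parameter space.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="hello_params" outpath="bench_run">
    <!-- Comma-separated values span the parameter space: 1, 2, 4 -->
    <parameterset name="number_set">
      <parameter name="number" type="int">1,2,4</parameter>
    </parameterset>
    <!-- This step is run once for each value of $number -->
    <step name="say_hello">
      <use>number_set</use>
      <do>echo "Running with $number tasks"</do>
    </step>
  </benchmark>
</jube>
```

Running `jube run` on such a file creates one work directory per parameter combination, which is the basis for the reproducible, documented results discussed in the webinar.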
Benefits for attendees (what they will learn):
- Automated and standardized workflow and benchmark generation with JUBE for reproducible and documented results.
- Thorough overview of JUBE features and best practices.
- Understanding of the use cases and limitations of JUBE.
- Benefits of JUBE for IO-SEA (and other projects).
Level
All - (advanced JUBE users may skip the “Basic Features” part)
Language
English
Prerequisites
- Knowledge of XML or YAML
- Basic knowledge of Python and HPC job scripts (Slurm) helpful but not required
Agenda and Content of the Webinar
13:00 – 13:30 Introduction to JUBE and its application in IO-SEA
13:30 – 14:15 Basic Features
14:15 – 14:30 Break
14:30 – 14:45 (Some) Advanced Features
14:45 – 15:30 Advanced JUBE scripts with real examples
15:30 – 16:00 Questions/Discussions
About the tutors
Yannik Müller, HPC software engineer, working in (Supercomputing) Application Support at Forschungszentrum Jülich (FZJ). He studied Computer Science (M.Sc.) at RWTH Aachen (2014-2019) and worked as a software developer for SCOOP (2019-2020). He then joined FZJ as a Scientific Assistant in the Application Support division, where he develops software tools (in-house and open source) such as LinkTest, LLview, and JUBE, and participates in the benchmarking and monitoring tasks of the IO-/DEEP-/RED-SEA projects. Especially through the benchmarking tasks, he has gained extensive experience with JUBE: besides writing many JUBE scripts and helping other use cases do so, he has also contributed feature ideas, documentation, and merge requests to the JUBE development.
Thomas Breuer studied applied mathematics & computer science at the University of Applied Sciences Aachen and obtained his Master's degree in 2014. After university, he worked on the development of the climate model MESSy/CLaMS and on satellite data visualisation. In 2015 he joined the Jülich Supercomputing Centre at Forschungszentrum Jülich as part of the application support division, within the algorithm, tools, and method lab "application optimization and user service tools". Besides his involvement in nationally and European funded research projects, his current focus is consulting on energy system modelling on High-Performance Computing systems. He is experienced in benchmarking various HPC platforms, optimizing scientific applications, training activities on various topics, software tool development, and general user support. He is currently the main developer of JUBE.
Acknowledgements
This work was supported by the IO-SEA project. This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 955811. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and France, Germany, the United Kingdom, Ireland, the Czech Republic, and Sweden. This project has received funding from the Ministry of Education, Youth and Sports of the Czech Republic (ID: MC2105).
This course was supported by the Ministry of Education, Youth and Sports of the Czech Republic through e-INFRA CZ (ID: 90254).