[ONLINE] Python for HPC 2

Europe/Prague
ZOOM (ONLINE)

Description

Annotation

This training introduces participants to Python for high-performance computing (HPC), covering Native Code Integration, Dask Distributed, and HPC utilization with Ray. Designed for researchers and developers, the course features hands-on lab sessions to strengthen practical skills and deepen understanding through real-world applications.

Benefits for the attendees: what they will learn

  • Understanding of Python’s role in high-performance computing (HPC)
  • Hands-on experience with lab exercises for practical skills
  • Knowledge of Native Code Integration, Dask Distributed, and Ray for scalable computing
  • Practical insights into optimizing and parallelizing Python applications for HPC environments

Level

Beginner

Language

English

Prerequisites

Experience with programming in Python.

Technical requirements: 

  • Python and its dependencies
  • Jupyter Notebook for interactive coding

HPC resources

GPU

Tutor

Ghaith Chaabane, Ph.D., is a Researcher at the Advanced Data Analysis and Simulation Laboratory within the IT4Innovations National Supercomputing Centre.

Acknowledgements

Python for HPC: ASC Public

LUMI AI Factory is funded jointly by the EuroHPC Joint Undertaking, through the European Union's Connecting Europe Facility and the Horizon 2020 research and innovation programme, as well as Finland, the Czech Republic, Poland, Estonia, Norway, and Denmark.

 

This course was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254).

 

All presentations and educational materials of this course are provided under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Surveys
Satisfaction survey
    • 09:00 - 09:30
      Welcome and Introduction

      Overview of the workshop objectives.

    • 09:30 - 10:30
      Native Code Integration

      Integrating native code into Python lets developers call high-performance libraries or custom compiled code to speed up computationally intensive tasks (see the ctypes sketch after the agenda).

    • 10:30 - 10:45
      Coffee Break
    • 10:45 - 12:00
      Dask Distributed

      Participants will learn to use Dask for parallel and distributed computing in Python, practising how to scale computations from a single machine to a cluster using task graphs (see the Dask sketch after the agenda).

    • 12:00 - 13:00
      Lunch Break
    • 13:00 - 14:00
      Ray

      Participants will explore Ray, learning how to build and scale Python and machine-learning applications efficiently across multiple cores or distributed systems (see the Ray sketch after the agenda).

    • 14:00 - 15:00
      DeepSpeed

      Participants will use DeepSpeed to train large-scale deep learning models efficiently (see the DeepSpeed sketch after the agenda).

    • 15:00 - 15:15
      Coffee Break
    • 15:15 - 16:45
      RAPIDS

      Participants will use RAPIDS to accelerate data science workflows on GPUs (see the cuDF sketch after the agenda).

    • 16:45 - 17:00
      Q&A and Closing
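
Code sketches

The sketches below illustrate, in agenda order, the kind of code each hands-on session deals with. They are minimal examples based on the publicly documented APIs of the libraries named above, not the actual course materials.

Native Code Integration: a minimal sketch using the standard-library ctypes module to call a compiled C routine; the session may equally cover tools such as Cython, cffi, or pybind11.

    import ctypes
    import ctypes.util

    # Locate and load the system math library (libm); resolution is platform-dependent.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C signature of cos(double) -> double so ctypes converts arguments correctly.
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0, computed by the compiled C routine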
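
Dask Distributed: a minimal sketch that starts a local scheduler and workers, builds a task graph lazily, and submits it for execution; on an HPC system the LocalCluster would typically be replaced by a multi-node deployment (for example via dask-jobqueue).

    import dask.array as da
    from dask.distributed import Client, LocalCluster

    if __name__ == "__main__":
        # Local scheduler and workers; stands in for a multi-node cluster.
        cluster = LocalCluster(n_workers=4, threads_per_worker=1)
        client = Client(cluster)

        # Build a task graph lazily: a large array split into chunks.
        x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
        result = (x @ x.T).mean()

        # Nothing executes until compute() sends the graph to the scheduler.
        print(result.compute())

        client.close()
        cluster.close()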
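
Ray: a minimal sketch of Ray's task API; ray.init() here starts a local instance, whereas on a cluster it would connect to a running head node (for example with ray.init(address="auto")).

    import ray

    # Start a local Ray instance; the CPU count is illustrative.
    ray.init(num_cpus=4)

    @ray.remote
    def square(x):
        # Each call becomes a task scheduled on any available worker.
        return x * x

    # Launch tasks in parallel and gather the results.
    futures = [square.remote(i) for i in range(8)]
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]

    ray.shutdown()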
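
DeepSpeed: a minimal training-loop sketch with a toy model and an illustrative configuration (Adam optimizer, ZeRO stage 1); real runs target large models on GPUs and are normally started with the DeepSpeed launcher, e.g. deepspeed train_sketch.py (the file name is only an example).

    # Intended to be started with the DeepSpeed launcher, e.g.:  deepspeed train_sketch.py
    import torch
    import deepspeed

    # Toy model; the session targets large-scale models, this is only illustrative.
    model = torch.nn.Linear(1024, 1024)

    # Minimal, illustrative DeepSpeed config: batch size, Adam optimizer, ZeRO stage 1.
    ds_config = {
        "train_batch_size": 16,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
        "zero_optimization": {"stage": 1},
    }

    # deepspeed.initialize wraps the model in an engine that owns the optimizer,
    # gradient handling, and distributed data parallelism.
    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config
    )

    for _ in range(10):
        x = torch.randn(16, 1024, device=model_engine.device)
        loss = model_engine(x).pow(2).mean()
        model_engine.backward(loss)  # replaces loss.backward()
        model_engine.step()          # replaces optimizer.step()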
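
RAPIDS: a minimal cuDF sketch showing the pandas-like API executing on the GPU; RAPIDS also includes libraries such as cuML and cuGraph, and requires an NVIDIA GPU with a supported driver.

    import numpy as np
    import cudf

    # Build a GPU DataFrame; the pandas-like cuDF API runs the work on the GPU.
    df = cudf.DataFrame({
        "group": ["a", "b", "c", "a", "b"] * 200_000,
        "value": np.arange(1_000_000, dtype="float64"),
    })

    # Group-by aggregation and sorting execute entirely on the GPU.
    summary = df.groupby("group")["value"].mean().sort_index()
    print(summary)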