GPU Computing in Python using PyCUDA
Learn how to benefit from GPUs within Python programs. This course introduces GPU hardware, CUDA, and PyCUDA, guiding you through practical examples to build your own parallel algorithms.
- Amsterdam Science Park, UvA campus, Building G, Room G2.10
In this course, you will learn how to benefit from GPUs within Python programs and how to implement your algorithms in Python so that they run on NVIDIA GPUs. We first introduce the NVIDIA GPU hardware and the CUDA programming model. We then present PyCUDA, an easy, pythonic way to access NVIDIA's CUDA parallel computation API. We work towards this goal through a series of examples, from simple vector addition to more complicated matrix multiplication, implementing each one from scratch together!
This is a generic course that teaches you about GPU hardware and the workflow of GPU programs, so that you can implement your own customized algorithms using PyCUDA. Please note that this is NOT a machine learning course introducing TensorFlow, PyTorch, or other ML frameworks: for that purpose, there is a separate course dedicated to ML, given by SURF.
Prerequisites
- Basic knowledge of Python and use of Jupyter notebooks