Individual lessons and their teaching materials are hosted here. You can access the site for each lesson at the links below.
August 4, 2017
High-performance computing with Python
Python is perhaps the most versatile programming language in existence, and sees widespread use in nearly every field of modern computing. This tutorial focuses on Python for high-performance computing applications, covering performance optimization, parallel programming, and pipelining. The afternoon session will focus on using Python to (easily) write and scale massively parallel data analysis pipelines across a cluster.
Prerequisites: Please come to the course with some variant of Python 3 installed, as well as your favorite text editor or IDE.
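As a small taste of the parallel-programming portion, here is a minimal sketch of process-based parallelism using Python's standard multiprocessing module (the `square` function and its inputs are illustrative stand-ins for a real CPU-bound task, not course material):

```python
from multiprocessing import Pool

def square(x):
    """A stand-in for a CPU-bound task."""
    return x * x

if __name__ == "__main__":
    # Distribute the inputs across a pool of worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because each worker is a separate process, this pattern sidesteps Python's global interpreter lock for CPU-bound work.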
August 4, 2017
Programming GPUs with CUDA
This is an introductory course covering programming and computing on GPUs — graphics processing units — which are an increasingly common presence in massively parallel computing architectures. The basics of GPU programming will be covered, and students will work through a number of hands on examples. The structuring of data and computations that makes full use of the GPU will be discussed in detail. This year the course will expand to cover the new features available on the GPUs installed on the Graham supercomputer at the University of Waterloo.
August 3, 2017
Distributed-memory programming with MPI
MPI (the Message Passing Interface) is widely used for programming parallel computers, ranging from shared-memory servers to large clusters. This workshop is directed at current or prospective users of parallel computers who want to significantly improve the performance of their programs by “parallelizing” the code on a wide range of platforms.
The content of the course covers the basics of MPI programming. After a brief introduction, we cover the fundamentals: about a dozen MPI routines that are enough to familiarize users with the basic concepts of message-passing programming.
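The course itself works with the real MPI routines (MPI_Send, MPI_Recv, and friends, typically from C or Fortran). Purely as a loose analogy for the send/receive pattern, message passing between two processes can be sketched with Python's standard multiprocessing module; this is not MPI, just an illustration of the idea:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive a message, transform it, and send a reply --
    # loosely analogous to an MPI_Recv followed by an MPI_Send.
    data = conn.recv()
    conn.send([x * 2 for x in data])
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send([1, 2, 3])   # loosely analogous to MPI_Send
    print(parent.recv())     # loosely analogous to MPI_Recv; prints [2, 4, 6]
    p.join()
```

Real MPI generalizes this two-process exchange to collectives (broadcast, scatter, gather, reduce) across hundreds or thousands of ranks.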
August 3, 2017
High-performance computing with R
The R programming language has become the standard tool for data science, statistics, and bioinformatics. This course focuses on making your R code as fast as possible, including topics on performance optimization and parallelization. There will be a major emphasis on newer additions to the language, in particular, the “tidyverse” set of packages.
Prerequisites: Please come to the workshop with both R and RStudio installed. You may also wish to pre-install the tidyverse set of R packages with install.packages("tidyverse").
August 2, 2017
Introduction to Julia
Julia is a high-level, high-performance dynamic programming language for numerical computing. It provides a compiler, parallel execution, numerical accuracy, and a mathematical function library. It also integrates open-source C and Fortran libraries for linear algebra, random number generation, signal processing, and string processing. It aims to combine the simplicity and accessibility of environments such as R and Python with the execution speed and efficiency of programming languages such as Fortran or C++.
August 2, 2017
Parallel programming with the POSIX thread library
This workshop is for programmers and scientists with a basic background in C programming who want to increase the flexibility and responsiveness of their code and take advantage of modern multicore, multithreaded computer architectures. It is an introduction to the POSIX thread library (pthreads) and its application to the parallelization of C programs. We assume no prior knowledge of multithreading or parallel programming, but some background in Unix operating systems and programming in C will be necessary.
August 1, 2017
Introduction to Fortran
The Fortran programming language was one of the first “high level” languages, and it has been around since the 1950s. Despite having been declared dead many times, it is far from it, and a great number of important technical computing packages are written and maintained in it. Fortran is specifically geared toward numerical computing and the development of scientific and engineering applications. Due to its structural simplicity, it also naturally supports efficient execution and thus is very suitable for “high-performance” computing.
August 1, 2017
Shared memory programming with OpenMP
The “multi-core revolution” has affected virtually every computer, from large SMP machines in research centres and banks down to smartphones with dual-core processors. To exploit the enhanced capabilities of such systems as a programmer, it is necessary to learn the basic principles of shared-memory parallel programming, also termed “multi-threading”. The use of multi-threading has the potential to speed up virtually any application, even on a single-core system, due to greater responsiveness and more efficient use of modern CPUs and memory.
July 31, 2017
Introduction to the Bash Shell
This class serves as an introduction to Linux, the Unix-like operating system that runs on almost all high-performance computing systems. It is intended for users who have little or no experience with Unix or Linux. The focus is on the common bash shell. We cover material that helps the user develop an understanding of the Linux command-line environment, which is necessary for successful use of these systems.
Topics covered include:
July 31, 2017
Introduction to High-Performance Computing
This tutorial is intended as an introduction to using high-performance computing resources provided by organizations like Compute Canada. This session will cover the essentials of working on a compute cluster, including running jobs, visualization, and software management.