Abstract:
“Scaling up the performance of Python applications without having to resort to writing the performance-critical sections in native code can be challenging. Numba is a tool that solves this problem by JIT-compiling user-selected Python functions using LLVM to deliver execution speed on a par with languages more traditionally used in scientific computing, such as Fortran and C++. As well as supporting CPU targets, Numba includes CUDA and HSA GPU backends that allow offloading of vectorised operations with little programmer effort. For more complicated GPU workloads, Numba provides similar capabilities to CUDA C within Python, along with debugging support that integrates with Python debuggers such as pdb and pudb.
This talk discusses the implementation of Numba and provides guidance for getting the best performance out of Numba-compiled code. Some examples of real-world applications that use Numba will be presented.”
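To give a flavour of the user-selected JIT compilation the abstract describes, here is a minimal sketch (not taken from the talk): a numeric kernel marked for compilation with Numba's `@njit` decorator. The function name and the Monte Carlo example are illustrative choices, and the `try`/`except` fallback is only there so the snippet still runs where Numba is not installed.

```python
import numpy as np

try:
    from numba import njit  # JIT-compile the decorated function with LLVM
except ImportError:
    # Fallback: run as plain Python if Numba is not available.
    def njit(f):
        return f

@njit
def mc_pi(n):
    """Estimate pi by Monte Carlo sampling -- a typical tight numeric
    loop that Numba compiles to native code."""
    inside = 0
    for _ in range(n):
        x = np.random.random()
        y = np.random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

estimate = mc_pi(1_000_000)
```

The first call triggers compilation for the argument types seen; subsequent calls reuse the compiled machine code, which is where the Fortran/C++-class speed mentioned above comes from.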
Speaker bio: Graham Markall came to Imperial for our MSc in Advanced Computing in 2008, and stayed to do a PhD on “Multilayered Abstractions for Partial Differential Equations” – work which formed the foundation for the Firedrake Project (http://firedrakeproject.org/). Since graduating, Graham has worked at OpenGamma and Continuum Analytics, and is currently a compiler engineer at Embecosm.