Imperial College London

Dr Peter Vincent

Faculty of Engineering, Department of Aeronautics

Reader in Aeronautics

Contact

+44 (0)20 7594 1975
p.vincent

Location

211 City and Guilds Building, South Kensington Campus

Publications

Citation

BibTeX format

@article{Wozniak:2016:10.1016/j.cpc.2015.12.012,
author = {Wozniak, BD and Witherden, FD and Russell, FP and Vincent, PE and Kelly, PHJ},
doi = {10.1016/j.cpc.2015.12.012},
journal = {Computer Physics Communications},
pages = {12--22},
title = {GiMMiK - Generating Bespoke Matrix Multiplication Kernels for Accelerators: Application to High-Order Computational Fluid Dynamics},
url = {http://dx.doi.org/10.1016/j.cpc.2015.12.012},
volume = {202},
year = {2016}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Matrix multiplication is a fundamental linear algebra routine ubiquitous in all areas of science and engineering. Highly optimised BLAS libraries (cuBLAS and clBLAS on GPUs) are the most popular choices for an implementation of the General Matrix Multiply (GEMM) in software. In this paper we present GiMMiK - a generator of bespoke matrix multiplication kernels for the CUDA and OpenCL platforms. GiMMiK exploits a prior knowledge of the operator matrix to generate highly performant code. The performance of GiMMiK's kernels is particularly apparent in a block-by-panel type of matrix multiplication, where the block matrix is typically small (e.g. dimensions of 96 × 64). Such operations are characteristic to our motivating application in PyFR - an implementation of Flux Reconstruction schemes for high-order fluid flow simulations on mixed unstructured meshes. GiMMiK fully unrolls the matrix-vector product and embeds matrix entries directly in the code to benefit from the use of the constant cache and compiler optimisations. Further, it reduces the number of floating-point operations by removing multiplications by zeros. Together with the ability of our kernels to avoid the poorly optimised cleanup code, executed by library GEMM, we are able to outperform cuBLAS on two NVIDIA GPUs: GTX 780 Ti and Tesla K40c. We observe speedups of our kernels over cuBLAS GEMM of up to 9.98 and 63.30 times for a 294 × 1029 99% sparse PyFR matrix in double precision on the Tesla K40c and GTX 780 Ti correspondingly. In single precision, observed speedups reach 12.20 and 13.07 times for a 4 × 8 50% sparse PyFR matrix on the two aforementioned cards. Using GiMMiK as the matrix multiplication kernel provider allows us to achieve a speedup of up to 1.70 (2.19) for a simulation of an unsteady flow over a cylinder executed with PyFR in double (single) precision on the Tesla K40c. All results were generated with GiMMiK version 1.0.
AU - Wozniak,BD
AU - Witherden,FD
AU - Russell,FP
AU - Vincent,PE
AU - Kelly,PHJ
DO - 10.1016/j.cpc.2015.12.012
EP - 22
PY - 2016///
SN - 0010-4655
SP - 12
TI - GiMMiK - Generating Bespoke Matrix Multiplication Kernels for Accelerators: Application to High-Order Computational Fluid Dynamics
T2 - Computer Physics Communications
UR - http://dx.doi.org/10.1016/j.cpc.2015.12.012
UR - http://hdl.handle.net/10044/1/28821
VL - 202
ER -
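The abstract describes the core code-generation idea: fully unroll the matrix-vector product, embed the operator matrix entries as literal constants, and drop multiplications by zero. A minimal sketch of that idea in Python is below; the function name and emitted C-like syntax are illustrative assumptions, not GiMMiK's actual generator or API.

```python
def generate_kernel_body(A):
    """Emit C-like source computing y = A @ x for a fixed matrix A.

    The product is fully unrolled: each output row becomes one
    statement, matrix entries appear as literal constants, and
    terms with a zero coefficient are omitted entirely. This is
    a hypothetical illustration of the technique the abstract
    describes, not GiMMiK's real implementation.
    """
    lines = []
    for i, row in enumerate(A):
        # Keep only the non-zero entries of this row.
        terms = [f"{a!r} * x[{j}]" for j, a in enumerate(row) if a != 0.0]
        rhs = " + ".join(terms) if terms else "0.0"
        lines.append(f"y[{i}] = {rhs};")
    return "\n".join(lines)


# Example: a small operator matrix with zeros that get eliminated.
A = [[1.0, 0.0, 2.0],
     [0.0, 0.0, 0.0],
     [3.0, 4.0, 0.0]]
print(generate_kernel_body(A))
```

For the sparse operator matrices that arise in PyFR's Flux Reconstruction schemes (up to 99% sparse in the paper's benchmarks), this kind of specialisation removes most floating-point operations relative to a general GEMM call.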