Imperial College London

Professor Peter Y. K. Cheung

Faculty of Engineering, Dyson School of Design Engineering

Head of the Dyson School of Design Engineering



+44 (0)20 7594 6200
p.cheung




Mrs Wiesia Hsissen +44 (0)20 7594 6261




910B, Electrical Engineering, South Kensington Campus






BibTeX format

@inproceedings{zhao2018hardware,
author = {Zhao, R and Liu, S and Ng, H and Wang, E and Davis, JJ and Niu, X and Wang, X and Shi, H and Constantinides, G and Cheung, P and Luk, W},
doi = {10.1109/ASAP.2018.8445088},
pages = {1--8},
publisher = {IEEE},
title = {Hardware Compilation of Deep Neural Networks: An Overview (invited)},
year = {2018}
}

RIS format (EndNote, RefMan)

TY - CONF
AB - Deploying a deep neural network model on a reconfigurable platform, such as an FPGA, is challenging due to the enormous design spaces of both network models and hardware design. A neural network model has various layer types, connection patterns and data representations, and the corresponding implementation can be customised with different architectural and modular parameters. Rather than manually exploring this design space, it is more effective to automate optimisation throughout an end-to-end compilation process. This paper provides an overview of recent literature proposing novel approaches to achieve this aim. We organise materials to mirror a typical compilation flow: front end, platform-independent optimisation and back end. Design templates for neural network accelerators are studied with a specific focus on their derivation methodologies. We also review previous work on network compilation and optimisation for other hardware platforms to gain inspiration regarding FPGA implementation. Finally, we propose some future directions for related research.
AU - Zhao,R
AU - Liu,S
AU - Ng,H
AU - Wang,E
AU - Davis,JJ
AU - Niu,X
AU - Wang,X
AU - Shi,H
AU - Constantinides,G
AU - Cheung,P
AU - Luk,W
DO - 10.1109/ASAP.2018.8445088
EP - 8
PY - 2018///
SP - 1
TI - Hardware Compilation of Deep Neural Networks: An Overview (invited)
ER -