Imperial College London

Professor Wayne Luk

Faculty of Engineering, Department of Computing

Professor of Computer Engineering
 
 
 

Contact

 

+44 (0)20 7594 8313
w.luk

 
 

Location

 

434 Huxley Building, South Kensington Campus



 

Publications

Citation

BibTeX format

@article{Liang:2017:10.1016/j.neucom.2017.09.046,
author = {Liang, S and Yin, S and Liu, L and Luk, W and Wei, S},
doi = {10.1016/j.neucom.2017.09.046},
journal = {Neurocomputing},
pages = {1072--1086},
title = {FP-BNN: Binarized neural network on FPGA},
url = {http://dx.doi.org/10.1016/j.neucom.2017.09.046},
volume = {275},
year = {2017}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Deep neural networks (DNNs) have attracted significant attention for their excellent accuracy especially in areas such as computer vision and artificial intelligence. To enhance their performance, technologies for their hardware acceleration are being studied. FPGA technology is a promising choice for hardware acceleration, given its low power consumption and high flexibility which makes it suitable particularly for embedded systems. However, complex DNN models may need more computing and memory resources than those available in many current FPGAs. This paper presents FP-BNN, a binarized neural network (BNN) for FPGAs, which drastically cuts down the hardware consumption while maintaining acceptable accuracy. We introduce a Resource-Aware Model Analysis (RAMA) method, and remove the bottleneck involving multipliers by bit-level XNOR and shifting operations, and the bottleneck of parameter access by data quantization and optimized on-chip storage. We evaluate the FP-BNN accelerator designs for MNIST multi-layer perceptrons (MLP), Cifar-10 ConvNet, and AlexNet on a Stratix-V FPGA system. An inference performance of Tera operations per second with acceptable accuracy loss is obtained, which shows improvement in speed and energy efficiency over other computing platforms.
AU - Liang,S
AU - Yin,S
AU - Liu,L
AU - Luk,W
AU - Wei,S
DO - 10.1016/j.neucom.2017.09.046
EP - 1086
PY - 2017///
SN - 0925-2312
SP - 1072
TI - FP-BNN: Binarized neural network on FPGA
T2 - Neurocomputing
UR - http://dx.doi.org/10.1016/j.neucom.2017.09.046
UR - http://hdl.handle.net/10044/1/56368
VL - 275
ER -
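The abstract mentions replacing multipliers with bit-level XNOR operations. As a rough illustration of that general technique (not the paper's actual implementation; all names below are hypothetical), a binarized dot product can be computed by encoding {+1, -1} values as single bits, XNOR-ing them, and counting matching bits:

```python
# Illustrative sketch of the XNOR-popcount trick used by binarized
# neural networks: a dot product over {+1, -1} vectors reduces to
# bit agreement counting. Names here are hypothetical, not from FP-BNN.

def binarize(xs):
    """Map real values to {+1, -1}, encoded as bits: 1 for +1, 0 for -1."""
    return [1 if x >= 0 else 0 for x in xs]

def bnn_dot(a_bits, w_bits):
    """Dot product of two {+1, -1} vectors given their bit encodings.

    XNOR of two bits is 1 exactly when the signed values agree
    (their product is +1). With k agreements out of n terms,
    the signed dot product is k - (n - k) = 2*k - n.
    """
    n = len(a_bits)
    agreements = sum(1 for a, w in zip(a_bits, w_bits) if a == w)  # XNOR + popcount
    return 2 * agreements - n

# Example: signed a = [+1, -1, +1, -1], w = [+1, +1, -1, -1]
# products are +1, -1, -1, +1, so the dot product is 0.
a = binarize([0.5, -1.2, 3.0, -0.1])   # bits [1, 0, 1, 0]
w = binarize([1.0, 2.0, -0.5, -2.0])   # bits [1, 1, 0, 0]
print(bnn_dot(a, w))  # → 0
```

No multiplications occur: the inner loop is pure bitwise comparison and counting, which maps naturally onto FPGA logic cells.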