Multi‐Precision Convolutional Neural Networks on Heterogeneous Hardware

Moslem Amiri, Mohammad Hosseinabady, Simon McIntosh‐Smith and Jose Nunez‐Yanez
Faculty of Engineering, University of Bristol, Bristol, UK
ma17215@bristol.ac.uk
m.hosseinabady@bristol.ac.uk
s.mcintosh-smith@bristol.ac.uk
j.l.nunez-yanez@bristol.ac.uk

ABSTRACT


Fully binarised convolutional neural networks (CNNs) deliver very high inference performance using single-bit weights and activations, together with XNOR-type operators for the kernel convolutions. Current research shows that full binarisation degrades accuracy, and different approaches to tackle this issue are being investigated, such as using more complex models to compensate for the loss of accuracy. This paper proposes an alternative based on a multi-precision CNN framework that combines a binarised and a floating-point CNN in a pipeline configuration deployed on heterogeneous hardware. The binarised CNN is mapped onto an FPGA device and performs inference over the whole input set, while the floating-point network is mapped onto a CPU device and performs re-inference only when the classification confidence level is low. A lightweight confidence mechanism enables a flexible trade-off between accuracy and throughput. To demonstrate the concept, we choose a Zynq 7020 device as the hardware target and show that the multi-precision network is able to increase the BNN accuracy from 78.5% to 82.5% and the CPU inference speed from 29.68 to 90.82 images/sec.
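The sketch below is a minimal, illustrative Python rendering of the confidence-gated dispatch described above, not the paper's implementation. The callables bnn_infer and fp_cnn_infer (stand-ins for the FPGA and CPU back ends), the softmax normalisation, and the use of the margin between the two highest scores as the confidence measure are assumptions made only for this sketch.

```python
import numpy as np

def _softmax(scores):
    """Numerically stable softmax used as a simple score normaliser."""
    z = scores - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

def multi_precision_classify(images, bnn_infer, fp_cnn_infer, margin_threshold=0.2):
    """Classify images with the BNN, re-running low-confidence ones on the FP CNN.

    bnn_infer / fp_cnn_infer: hypothetical callables mapping one image to a
    vector of class scores (fast binarised network and accurate floating-point
    network, respectively).
    margin_threshold: minimum gap between the two highest normalised scores
    for the BNN result to be accepted without re-inference.
    """
    labels = []
    for img in images:
        probs = _softmax(np.asarray(bnn_infer(img), dtype=np.float64))
        top_two = np.sort(probs)[-2:]          # [second-highest, highest]
        if top_two[1] - top_two[0] < margin_threshold:
            # Low confidence: fall back to the floating-point CNN on the CPU.
            probs = np.asarray(fp_cnn_infer(img), dtype=np.float64)
        labels.append(int(np.argmax(probs)))
    return labels
```

In such a scheme, raising the margin threshold routes more images to the floating-point network, trading throughput for accuracy, while lowering it does the opposite.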

Keywords: Multi-precision, performance, convolutional neural network, deep learning, heterogeneous, FPGA, ARM, CIFAR-10, inference.


