Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA

Gang Li1,2,a, Fanrong Li1,2, Tianli Zhao1 and Jian Cheng1,2,3,b
1National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2University of Chinese Academy of Sciences
3CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
a gang.li@nlpr.ia.ac.cn
b jcheng@nlpr.ia.ac.cn

ABSTRACT


FPGA-based CNN accelerators have gained popularity in recent years due to their high energy efficiency and great flexibility. However, as networks grow in depth and width, the volume of intermediate data becomes too large to store on chip, so data must be transferred frequently between on-chip and off-chip memory, which leads to extra off-chip memory access latency and energy consumption. In this paper, we propose block convolution, a memory-efficient, simple yet effective block-based convolution that completely avoids streaming intermediate data out to off-chip memory during network inference. Experiments on the very large VGG-16 network show that the proposed approach improves top-1/top-5 accuracy to 72.60%/91.10% on the ImageNet classification task. As a case study, we implement the VGG-16 network with block convolution on the Xilinx Zynq ZC706 board, achieving a frame rate of 12.19 fps at a 150 MHz working frequency, with all intermediate data kept on chip.
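As a rough illustration of the idea (not the paper's exact formulation), the sketch below contrasts a plain 2-D convolution with a block-based variant that partitions the feature map into spatially independent tiles and zero-pads each tile locally, so no data needs to cross tile borders and each tile's intermediate results can remain in on-chip memory. The function names, tile size, and the per-tile zero-padding scheme are assumptions made for this example.

```python
import numpy as np

def conv2d(x, w, pad=1):
    """Plain 2-D convolution (single channel, stride 1) with zero padding."""
    k = w.shape[0]
    xp = np.pad(x, pad)
    h, wd = x.shape
    out = np.zeros((h, wd), dtype=x.dtype)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def block_conv2d(x, w, block=4, pad=1):
    """Block convolution (illustrative): split the feature map into tiles,
    zero-pad each tile locally, and convolve tiles in isolation. No data is
    exchanged across tile borders, so tiles can be processed independently."""
    h, wd = x.shape
    out = np.zeros((h, wd), dtype=x.dtype)
    for bi in range(0, h, block):
        for bj in range(0, wd, block):
            tile = x[bi:bi + block, bj:bj + block]
            out[bi:bi + block, bj:bj + block] = conv2d(tile, w, pad)
    return out

if __name__ == "__main__":
    x = np.random.rand(8, 8).astype(np.float32)
    w = (np.ones((3, 3)) / 9.0).astype(np.float32)
    y_full = conv2d(x, w)
    y_block = block_conv2d(x, w, block=4)
    # Interior pixels of each tile match the full convolution; only pixels on
    # tile borders differ, because of the local zero padding at tile edges.
    print(np.abs(y_full - y_block).max())
```

Under these assumptions, the only numerical difference from standard convolution appears at tile boundaries, which is the trade-off the accuracy numbers above suggest the network can absorb after retraining.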


