Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing

Zejian Liu (1,2,a), Gang Li (1,b) and Jian Cheng (1,2,c)
(1) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
(2) School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
(a) liuzejian2018@ia.ac.cn
(b) gang.li@nlpr.ia.ac.cn
(c) jcheng@nlpr.ia.ac.cn

ABSTRACT


BERT is a recent Transformer-based model that achieves state-of-the-art performance on various NLP tasks. In this paper, we investigate the hardware acceleration of BERT on FPGA for edge computing. To tackle the huge computational complexity and memory footprint, we propose to fully quantize BERT (FQ-BERT), including weights, activations, softmax, layer normalization, and all the intermediate results. Experiments demonstrate that FQ-BERT can achieve 7.94× compression of the weights with negligible performance loss. We then propose an accelerator tailored to FQ-BERT and evaluate it on Xilinx ZCU102 and ZCU111 FPGAs. It achieves a performance-per-watt of 3.18 fps/W, which is 28.91× and 12.72× higher than an Intel(R) Core(TM) i7-8700 CPU and an NVIDIA K80 GPU, respectively.
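For context, the core idea behind such full quantization is to replace floating-point tensors with low-bit integer codes plus a scale factor, so that matrix multiplications can run in integer arithmetic on the FPGA. The sketch below shows generic symmetric uniform quantization in NumPy; it is an illustrative assumption for readers unfamiliar with quantization, not the authors' exact FQ-BERT scheme (which additionally quantizes softmax, layer normalization, and intermediate results).

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    """Symmetric uniform quantization of a tensor to signed integers.

    Returns the integer codes and the scale needed for dequantization.
    Assumes num_bits <= 8 so the codes fit in int8.
    """
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(x)) / qmax + 1e-12    # per-tensor scale; avoid divide-by-zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate real values."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure reconstruction error.
w = np.random.randn(768, 768).astype(np.float32)
w_q, s = quantize_symmetric(w, num_bits=4)      # 4-bit weights, purely illustrative
err = np.abs(w - dequantize(w_q, s)).mean()
print(f"mean absolute quantization error: {err:.4f}")
```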


