Energy-Efficient Inference Accelerator for Memory-Augmented Neural Networks on an FPGA

Seongsik Park, Jaehee Jang, Seijoon Kim and Sungroh Yoon
Seoul National University, Seoul, Korea
sryoon@snu.ac.kr

ABSTRACT


Memory-augmented neural networks (MANNs) are designed for question-answering tasks. Running a MANN efficiently on accelerators designed for other neural networks (NNs) is difficult, particularly on mobile devices, because MANNs require recurrent data paths and various types of operations related to external memory access. We implement an accelerator for MANNs on a field-programmable gate array (FPGA) based on a data-flow architecture. Inference times are further reduced by inference thresholding, a data-based maximum inner-product search specialized for natural-language tasks. Measurements on the bAbI dataset show that the energy efficiency (FLOPS/kJ) of the accelerator was about 125 times higher than that of an NVIDIA TITAN V GPU, rising to about 140 times with inference thresholding.
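The inference-thresholding idea mentioned above can be illustrated with a minimal sketch. In a MANN's output layer, the predicted answer is the vocabulary entry whose weight vector has the largest inner product with the hidden state; a data-derived score threshold allows the search to stop early once a sufficiently confident match is found. The function name, the NumPy formulation, and the way the threshold is applied here are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def thresholded_mips(hidden, weights, threshold):
    """Hedged sketch of early-exit maximum inner-product search (MIPS).

    hidden    -- hidden-state vector, shape (d,)
    weights   -- output weight matrix, shape (vocab, d)
    threshold -- assumed to be precomputed from training data; once a
                 class score exceeds it, we return that class immediately
                 instead of scanning the rest of the vocabulary.
    Returns (index, score) of the selected answer.
    """
    best_idx, best_score = -1, -np.inf
    for i, w in enumerate(weights):
        score = float(np.dot(w, hidden))
        if score >= threshold:          # confident early exit
            return i, score
        if score > best_score:          # otherwise track the running max
            best_idx, best_score = i, score
    return best_idx, best_score
```

With a high threshold the function degenerates to an exhaustive argmax; with a well-chosen threshold it skips most of the inner products, which is where the inference-time savings come from.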

Keywords: Deep learning, Memory-augmented neural networks, Inference accelerator, FPGA, Data-based maximum inner-product search, Question answering.
